I am using the 'rPython' package for calling Python from R, but I am unable to make R refer to my Python virtual environment.
In R, I have tried using
system('. /home/username/Documents/myenv/env/bin/activate')
but after running the above my Python library path does not change (which I check via python.exec("print sys.path")). When I run
python.exec('import nltk')
I am thrown the error:
Error in python.exec("import nltk") : No module named nltk
although it is there in my virtual env.
I am using R 3.0.2, Python 2.7.4 on Ubuntu 13.04.
Also, I know I can change the python library path from within R by using
python.exec("sys.path='\your\path'")
but I don't want this to be entered manually over and over again whenever a new python package is installed.
Thanks in advance!
Use the "activate" bash script before running R, so that the R process inherits the changed environment variables.
$ source myvirtualenv/bin/activate
$ R
Now rPython should be able to use the packages in your virtualenv.
Works for me. May behave strangely if the Python version you made the virtualenv with is different from the one rPython links into the R process.
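If you do this often, a small wrapper script can bundle the two steps; a minimal sketch (the script name is made up, the venv path is from the question):
#!/bin/bash
# r-with-venv.sh: activate the Python virtualenv, then hand over to R
source /home/username/Documents/myenv/env/bin/activate
exec R "$@"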
Expanding on @PaulHarrison's answer, you can mimic what .../activate is doing directly in the environment (before starting Python from R).
Here's one method for determining what vars are modified:
$ set > pyenv-pre
$ . /path/to/venv/activate
(venvname) $ set > pyenv-post
(venvname) $ diff -uw pyenv-pre pyenv-post
This gave me something like:
--- pyenv-pre 2018-12-02 15:16:43.093203865 -0800
+++ pyenv-post 2018-12-02 15:17:34.084999718 -0800
@@ -33,10 +33,10 @@
OPTERR=1
OPTIND=1
OSTYPE=linux-gnu
-PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+PATH=/path/to/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
PIPESTATUS=([0]="0")
PPID=325990
-PS1='\[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
+PS1='(venvname) \[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
PS2='> '
PS4='+ '
PWD=/
@@ -50,10 +50,13 @@
TERM=xterm
UID=3000019
USER='helloworld'
+VIRTUAL_ENV=/path/to/venv
XDG_DATA_DIRS=/usr/local/share:/usr/share:/var/lib/snapd/desktop
XDG_RUNTIME_DIR=/run/user/3000019
XDG_SESSION_ID=27577
-_=set
+_=/path/to/venv/bin/activate
+_OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+_OLD_VIRTUAL_PS1='\[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
__git_printf_supports_v=yes
__grub_script_check_program=grub-script-check
_backup_glob='@(#*#|*@(~|.@(bak|orig|rej|swp|dpkg*|rpm@(orig|new|save))))'
@@ -2390,6 +2393,31 @@
fi;
fi
}
+deactivate () ... rest of this function snipped for brevity
So it appears that the important envvars to update are:
PATH: prepend the venv bin directory to the existing paths
VIRTUAL_ENV: set to /path/to/venv
I believe the other changes (_OLD_VIRTUAL_* and deactivate () ...) are optional and really only used to back out the venv activation.
Looking at the .../activate script verifies these are most of the steps taken. Another step is unset PYTHONHOME if set, which may not be shown in the diff above if you didn't have it set previously.
To R-ize this:
Sys.setenv(
PATH = paste("/path/to/venv/bin", Sys.getenv("PATH"), sep = .Platform$path.sep),
VIRTUAL_ENV = "/path/to/venv"
)
Sys.unsetenv("PYTHONHOME") # works whether previously set or not
I've had luck getting scripts to use my pyenv installation by using:
#!/usr/bin/env python
So maybe try pointing R to that path (sans #!, of course).
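If PATH has been updated as described above, env will already resolve python to the virtualenv's interpreter; a quick sanity check (output illustrative):
$ command -v python
/path/to/venv/bin/python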
Managed to get it working by using bash -c:
system("/bin/bash -c \"source ./pydatatable/py-pydatatable/bin/activate && python -c 'import datatable as dt; print(dt.__version__)'\"")
I am trying to run some code inside a server. In that server, we use docker images to create notebooks inside directories, with commands like:
docker run -it --gpus "device=1" -p 8886:8886 -v /folder/directory:/workspace/work --name container-name --shm-size=60g --ulimit memlock=-1 --ulimit stack=67108864 --rm imageid jupyter notebook --port=8886 --ip=0.0.0.0 --allow-root --no-browser
Once the notebook is created with an image, we have two different environments with two different Python versions in the folder, designed to execute the code inside /folder/directory: venv3.6 and venv3.7.
Even if I didn't create them, I am confident that the environments worked at some point (there are checkpoints obtained from the execution of the code by a colleague who worked on it before me). However, they must have been messed with at some point, maybe after some modifications to the libraries of the docker image.
The problem is that, whenever I try to activate venv3.7 by using source ./venv3.7/bin/activate and run a script with python script_name.py, the Python version that is executed is not 3.7 but 3.6.10. When going into /venv3.7/bin and trying to access the python, python3 or python3.7 files, they cannot be accessed, moved or executed (i.e., if I enter /venv3.7/bin/python3.7 in the terminal, I get a file-not-found error).
When the environment is activated:
root@XXXX:/workspace/work/path# which python
/opt/conda/bin/python
root@XXXX:/workspace/work/path# source ./venv3.7/bin/activate
(venv3.7) root@XXXX:/workspace/work/path#
Following this Stack Overflow post, I performed the following checks:
(venv3.7) root@XXXX:/workspace/work/path# python -V
Python 3.6.10 :: Anaconda, Inc.
(venv3.7) root@XXXX:/workspace/work/path# echo $PATH
/workspace/work/path/venv3.7/bin:/usr/local/nvm/versions/node/v15.0.1/bin:/opt/conda/bin:/opt/cmake-3.14.6-Linux-x86_64/bin/:/usr/local/mpi/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/ucx/bin:/opt/tensorrt/bin
(venv3.7) root@XXXX:/workspace/work/path# which python
/opt/conda/bin/python
(venv3.7) root@XXXX:/workspace/work/path# alias
alias egrep='egrep --color=auto'
alias fgrep='fgrep --color=auto'
alias grep='grep --color=auto'
alias l='ls -CF'
alias la='ls -A'
alias ll='ls -alF'
alias ls='ls --color=auto'
This shows that the path is added correctly and there is no alias for python that could be interfering with the activation; still, the python command uses the version from /opt/conda/bin/python instead of /workspace/work/path/venv3.7/bin.
I have also checked that the VIRTUAL_ENV path in the activate script (venv3.7/bin/activate) is correct.
I noticed that the file /venv3.7/pyvenv.cfg contains:
home = /usr/bin
include-system-site-packages = false
version = 3.7.5
And when I go to the directory /usr/bin, which should contain the Python on which the environment is based, it only has python2.7 files. Could that mean that, when the first directory in $PATH is followed, no valid version of Python is found?
My guess is that the python files (python, python3, python3.7) were symlinks that were broken because the Python version changed in /usr/bin. However, I don't want to risk updating the version of Python in that directory, because it would probably change the default Python in /opt/conda/bin/python instead, and I don't know much about docker images. Do you think it would work? In that case, how would I do it?
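A quick, non-destructive way to test the broken-symlink theory is to list the symlinks in the venv's bin directory whose targets no longer exist; a sketch, using the paths from the question:
# Print symlinks in venv3.7/bin whose targets are missing
find /workspace/work/path/venv3.7/bin -maxdepth 1 -type l ! -exec test -e {} \; -print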
As additional info, the python files inside venv3.6/bin seem to work well (they can be executed and copied), but maybe because /venv3.6/pyvenv.cfg leads to the default Python instead (in /opt/conda/bin/python). Also, after asking the original creator of the code, she doesn't know how to solve this issue either.
I need the environment to work, and recreating it is problematic, since many libraries were downloaded from different places (it was delicate work).
What do you suggest?
EDIT
I have tried recreating the environment with the Python version I need (3.7.5). Do you know of an easy way to install the same libraries as in the other environment, considering that I can't activate it?
I was thinking of using the folders with the libraries located in /venv3.7/lib, but it is not straightforward. Any idea on how to do it?
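One way to recover the package list without activating the broken environment is to read its site-packages with a working interpreter; a sketch, assuming a reasonably recent pip (the --path option needs pip >= 19.2) and the paths from the question:
# List what was installed in the broken venv, using the working conda python
/opt/conda/bin/python -m pip freeze --path /workspace/work/path/venv3.7/lib/python3.7/site-packages > requirements.txt
# Then, in the recreated environment:
pip install -r requirements.txt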
Also, would you recommend creating the new environment with virtualenv (to have a separate Python version) or, rather, with anaconda?
Thank you so much for reading me!
After checking the python3.7 file in the environment:
root@XXXX:/# cd workspace/work/path/venv3.7/bin
root@XXXX:/workspace/work/path/venv3.7/bin# stat python
File: python -> python3.7
Size: 9 Blocks: 0 IO Block: 4096 symbolic link
Device: XXXX Inode: XXXX Links: 1
Access: (XXXX) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2022-12-06 10:31:18.165001523 +0000
Modify: 2022-05-20 12:28:37.481538688 +0000
Change: 2022-05-20 12:28:37.481538688 +0000
Birth: -
root@XXXX:/workspace/work/path/venv3.7/bin# stat python3.7
File: python3.7 -> /usr/bin/python3.7
Size: 18 Blocks: 0 IO Block: 4096 symbolic link
Device: XXXX Inode: XXXX Links: 1
Access: (XXXX) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2022-12-06 10:31:18.165001523 +0000
Modify: 2022-05-20 12:28:37.481538688 +0000
Change: 2022-05-20 12:28:37.481538688 +0000
Birth: -
It became obvious that, as stated in the post, /usr/bin is the directory where python3.7 should be installed. That means the problem could be solved by installing it in that folder.
As I didn't know that was the default folder for the Python installation, I first tried installing Python from source as described in several guides. However, even though the environment then found python3.7 in that folder, the installation didn't work either.
So I just tried apt-get install python3.7. It took around 10 seconds and, when I tried the code again, it worked!
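For reference, the repair boiled down to the following (assuming a Debian/Ubuntu-based image, as here; paths are from the question):
apt-get update && apt-get install -y python3.7   # restores /usr/bin/python3.7
/workspace/work/path/venv3.7/bin/python --version  # the venv symlink resolves again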
Next time, when your environment fails because the wrong Python version is executed even though the aliases and $PATH are right (see this post for more details), just remember to check where the python files in the environment point to and verify that the Python installation is correct!
I hope this is useful for you.
I have all playbooks in /etc/ansible/playbooks and I want to execute them from anywhere on the PC.
I tried to configure playbook_dir variable in ansible.cfg
[defaults]
playbook_dir = /etc/ansible/playbooks/
and tried to put ANSIBLE_PLAYBOOK_DIR variable in ~/.bashrc
export ANSIBLE_PLAYBOOK_DIR=/etc/ansible/playbooks/
but I only got the same error in both cases:
nor@nor:~$ ansible-playbook test3.yaml
ERROR! the playbook: test3.yaml could not be found
This is my ansible version:
ansible 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/nor/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.3 (default, Oct 7 2019, 12:56:13) [GCC 8.3.0]
Does anyone know the problem and how to solve it?
According to https://manpages.debian.org/testing/ansible/ansible-inventory.1.en.html :
--playbook-dir 'BASEDIR'
Since this tool does not use playbooks, use this as a substitute playbook directory. This sets the relative path for many features including roles/, group_vars/, etc.
This means that ANSIBLE_PLAYBOOK_DIR is not used as a replacement for specifying the absolute or relative path to your playbook; rather, it tells Ansible where to look for roles, host/group vars, etc.
The goal you're trying to achieve has no solution on the Ansible side; you need to achieve it by configuring your shell profile accordingly.
set the following in your .bashrc file:
export playbooks_dir=/path/to/playbooks
when you call the playbook use ansible-playbook $playbooks_dir/test3.yml
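To avoid typing the variable every time, you could also wrap the call in a small shell function in .bashrc; a sketch (the function name apb is made up):
# Run any playbook from the shared directory, passing extra flags through
apb() { ansible-playbook "$playbooks_dir/$1" "${@:2}"; }
# Usage: apb test3.yaml --check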
As others have said, ANSIBLE_PLAYBOOK_DIR is for setting the relative directory for roles/, files/, etc. IMHO, it's not terribly useful.
If I understand the OP, this is how I accomplish a similar result with all versions of ansible ...
PPWD=$PWD; cd /my/playbook/dir && ansible-playbook my_playbook.yml; cd "$PPWD"
Explained,
PPWD=$PWD is to remember the current/present/previous working directory, then
cd /my/playbook/dir and if that succeeds run ansible-playbook my_playbook.yml (everything is relative from there); regardless, always change back to the previous working directory
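An equivalent that avoids saving and restoring the directory by hand is to run the pair in a subshell, so the cd never affects your current shell:
( cd /my/playbook/dir && ansible-playbook my_playbook.yml )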
The documentation for ANSIBLE_PLAYBOOK_DIR says:
"A number of non-playbook CLIs have a --playbook-dir argument; this sets the default value for it."
Unfortunately, there is no hint in the doc what "the non-playbook CLIs" might be. ansible-playbook isn't one of them, obviously.
FWIW. If you're looking for a command-line oriented framework try ansible-runner. For example, export the location of private_data_dir
shell> export ansible_private=/path/to/<private-data-dir>
Then run the playbook
shell> ansible-runner -p playbook.yml run $ansible_private
When a Python script is supposed to be run from a pyenv virtualenv, what is the correct shebang for the file?
As an example test case, the default Python on my system (OS X) does not have pandas installed. The pyenv virtualenv venv_name does. I tried getting the path of the Python executable from the virtualenv.
pyenv activate venv_name
which python
Output:
/Users/username/.pyenv/shims/python
So I made my example script.py:
#!/Users/username/.pyenv/shims/python
import pandas as pd
print 'success'
But when I tried running the script (from within 'venv_name'), I got an error:
./script.py
Output:
./script.py: line 2: import: command not found
./script.py: line 3: print: command not found
Although running that path directly on the command line (from within 'venv_name') works fine:
/Users/username/.pyenv/shims/python script.py
Output:
success
And:
python script.py # Also works
Output:
success
What is the proper shebang for this? Ideally, I want something generic so that it will point at the Python of whatever my current venv is.
I don't really know why calling the interpreter with the full path wouldn't work for you. I use it all the time. But if you want to use the Python interpreter that is in your environment, you should do:
#!/usr/bin/env python
That way you search your environment for the Python interpreter to use.
As you expected, you should be able to use the full path to the virtual environment's Python executable in the shebang to choose/control the environment the script runs in regardless of the environment of the controlling script.
In the comments on your question, VPfB & you find that the /Users/username/.pyenv/shims/python is a shell script that does an exec $pyenv_python. You should be able to echo $pyenv_python to determine the real python and use that as your shebang.
See also: https://unix.stackexchange.com/questions/209646/how-to-activate-virtualenv-when-a-python-script-starts
Try pyenv virtualenvs to find a list of virtual environment directories.
And then you might find that a shebang something like this:
#!/Users/username/.pyenv/python/versions/venv_name/bin/python
import pandas as pd
print 'success'
... will enable the script to work using the chosen virtual environment in other (virtual or not) environments:
(venv_name) $ ./script.py
success
(venv_name) $ pyenv activate non_pandas_venv
(non_pandas_venv) $ ./script.py
success
(non_pandas_venv) $ . deactivate
$ ./script.py
success
The trick is that if you call out the virtual environment's Python binary specifically, the Python interpreter looks around that binary's path location for the supporting files and ends up using the surrounding virtual environment. (See How does virtualenv work?)
If you need to use more shell than you can put in the #! shebang line, you can start the file with a simple shell script which launches Python on the same file.
#!/bin/bash
"exec" "pyenv" "exec" "python" "$0" "$#"
# the rest of your Python script can be written below
Because of the quoting, Python doesn't execute the first line, and instead joins the strings together for the module docstring... which effectively ignores it.
You can see more here.
To expand this to an answer, yes, in 99% of the cases if you have a Python executable in your environment, you can just use:
#!/usr/bin/env python
However, for a custom venv on Linux the same syntax did not work for me, since the venv only contains a link to the Python interpreter it was created from, so I had to do the following:
#!/path/to/the/venv/bin/python
Essentially, however you call the Python interpreter in your terminal is what you would put after #!.
It's not exactly answering the question, but I think this suggestion by ephiement is a much better way to do what you want. I've elaborated a bit and added more explanation of how this works and how you can dynamically select the Python executable to use:
#!/bin/sh
#
# Choose the Python executable we need. Explanation:
# a) '''\' translates to \ in shell, and starts a python multi-line string
# b) "" strings are treated as string concatenation by Python; the shell ignores them
# c) the "true" command ignores its arguments
# d) exit before the ending ''' so the shell reads no further
# e) the closing ''' ends the multi-line string, so Python skips the shell code above
#
"true" '''\'
PREFERRED_PYTHON=/Library/Frameworks/Python.framework/Versions/2.7/bin/python
ALTERNATIVE_PYTHON=/Library/Frameworks/Python.framework/Versions/3.6/bin/python3
FALLBACK_PYTHON=python3
if [ -x $PREFERRED_PYTHON ]; then
echo Using preferred python $PREFERRED_PYTHON
exec $PREFERRED_PYTHON "$0" "$@"
elif [ -x $ALTERNATIVE_PYTHON ]; then
echo Using alternative python $ALTERNATIVE_PYTHON
exec $ALTERNATIVE_PYTHON "$0" "$@"
else
echo Using fallback python $FALLBACK_PYTHON
exec python3 "$0" "$@"
fi
exit 127
'''
__doc__ = """What this file does"""
print(__doc__)
import platform
print(platform.python_version())
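Saved as script.py and made executable, it runs something like this (output illustrative; the path and version depend on which interpreter is found):
$ chmod +x script.py
$ ./script.py
Using preferred python /Library/Frameworks/Python.framework/Versions/2.7/bin/python
What this file does
2.7.16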
If you want just a single script with a simple selection of your pyenv virtualenv, you may use a Bash script with your source as a heredoc as follows:
#!/bin/bash
PYENV_VERSION=<your_pyenv_virtualenv_name> python - "$@" <<EOF
import sys
print(sys.argv)
exit
EOF
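Saved as, say, script.sh and made executable, arguments pass straight through to the embedded Python (illustrative):
$ ./script.sh one two
['-', 'one', 'two']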
I did some additional testing. The following works too:
#!/usr/bin/env -S PYENV_VERSION=<virtual_env_name> python
/usr/bin/env python won't work, since it doesn't know about the virtual environment.
Assuming that you have main.py living next to a ./venv directory, you need to use Python from the venv directory. Or in other words, use this shebang:
#!venv/bin/python
Now you can do:
./main.py
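Note that this relative shebang only resolves when the script is launched from the directory containing venv/ (paths illustrative):
$ cd /path/to/project && ./main.py      # works: venv/bin/python is found
$ cd /tmp && /path/to/project/main.py   # fails: venv/bin/python is resolved against /tmp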
Maybe you need to check the file permissions:
sudo chmod +x script.py
I have a script that drives installation of a lot of packages. In one place, it uses pip. One of the packages requires its own special command-line argument for the build process.
pip enables install options to be passed in to the build process as follows:
pip install -U --timeout 30 $options --install-option='--hdf5=/usr/local/hdf5' tables
--install-option is an argument to pip. The value it is set to, --hdf5=/usr/local/hdf5, will be passed on to the build process. So, the single quotes have to be there to group all of the text as one argument that follows the equal sign. You might say I could just use double quotes to surround the value of the install-option. Well, at the command line I could.
But, here is the added complication. This is in a script. The parameter values for the pip command are passed to a function in an array. The array entry for this package looks like:
("tables,pip,,--install-option='--hdf5=/usr/local/hdf5'")
The receiving function parses the array entry with set as in this fragment:
IFS="," # to split apart the pkg array entries
for pkg in "${pkglist[#]}"; do
set -- ${pkg}
if [[ "$2" == "pip" ]]; then # $1 is pkg, $2 is pip, $3 is url, $4 is options
DoPip $1 $3 $4
...
So, DoPip, for this package, is seeing: DoPip tables '' --install-option='--hdf5=/usr/local/hdf5'
The problem occurs in DoPip. I can't figure out how to expand the last argument when I need to run pip itself. I have done a bunch of debugging to see what happens. What happens is that the value of $3 is simply being dropped--it just disappears. It will echo in a string, but it will not work as part of a command.
Looking at the function DoPip. To help debug, I reassign the arguments to explicit variables. It's not necessary, but helped make sure there weren't stupid mistakes on my part.
DoPip() {
# run pip command to install packages
# arguments: 1: package-name 2: optional source <URL>
# 3: optional pip options
pkgname=$1
url=$2
options=$3
Next, I set a variable source to be either the pkgname or the url, if the url is non-blank. I am skipping this fragment--it works.
To debug, I echo the reassigned arguments:
echo "1. The inbound arguments are: $pkgname $url $options"
The output LOOKS like it ought to work:
The inbound arguments are: tables --install-option='--hdf5=/usr/local/hdf5'
Here is the statement that actually runs pip with these arguments:
pip install -U --timeout 30 $options $source
With debugging on, here is what Bash actually sees and runs:
+ pip install -U --timeout 30 tables
Whoa! What happened to $options? It's GONE! In fact, immediately prior to this statement I repeat the echo to verify that no intervening part of the script caused the value to get flushed. Not a problem. I can echo the value of $options immediately prior--it's ok. Then, it's gone.
I can't figure out what is happening or how to do this. I have tried various ways of escaping the single quotes in the array where the string literal is originally created based on reading how very special single quotes are. Nothing works. The whole variable expansion just goes away.
I have tried doing the expansion in various ways:
pip install -U --timeout 30 "$options" $source
That doesn't work. The string in options appears but surrounded by single quotes so the pip command throws an error. Next, I tried:
pip install -U --timeout 30 "${options}" $source
also fails: single quotes and the curly braces appear and pip is unhappy again.
The --install-options argument is essential. The build fails without it.
There has to be some way to do this. Any suggestions?
$ bash -version
GNU bash, version 4.2.25(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2011 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This script gave the following output:
#!/bin/bash -vx
options=--install-option='--hdf5=/usr/local/hdf5'
source=tables
pip install -U --timeout 30 $options $source
$ ./script.sh
#!/bin/bash -vx
options=--install-option='--hdf5=/usr/local/hdf5'
+ options=--install-option=--hdf5=/usr/local/hdf5
source=tables
+ source=tables
pip install -U --timeout 30 $options $source
+ pip install -U --timeout 30 --install-option=--hdf5=/usr/local/hdf5 tables
Downloading/unpacking tables
Downloading tables-3.1.1.tar.gz (6.7MB): 6.7MB downloaded
Running setup.py (path:/tmp/pip_build_ankur/tables/setup.py) egg_info for package tables
* Using Python 2.7.3 (default, Feb 27 2014, 19:58:35)
* Found numpy 1.6.1 package installed.
.. ERROR:: You need numexpr 2.0.0 or greater to run PyTables!
Complete output from command python setup.py egg_info:
* Using Python 2.7.3 (default, Feb 27 2014, 19:58:35)
* Found numpy 1.6.1 package installed.
.. ERROR:: You need numexpr 2.0.0 or greater to run PyTables!
----------------------------------------
Cleaning up...
Command python setup.py egg_info failed with error code 1 in /tmp/pip_build_ankur/tables
Storing debug log for failure in /home/ankur/.pip/pip.log
Ignore the errors, I am seeing the value of $options being correctly read and passed to the shell.
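For completeness, the usual way to keep such pre-quoted arguments intact across function calls is a Bash array, where each option is its own element; a minimal sketch, not from the thread (names hypothetical):
options=(--install-option='--hdf5=/usr/local/hdf5')  # one array element per pip argument
DoPip() {
  local pkgname=$1; shift
  pip install -U --timeout 30 "$@" "$pkgname"        # "$@" expands each element unsplit
}
DoPip tables "${options[@]}"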
All of this was embedded in a very large script. I created a shorter version to focus on the problem. In the end, I adopted a different approach relying on the environment variable HDF5_DIR rather than the cmd line switch for pip.
Apparently, you can only do one or the other. In any case, in the cleaned up code the argument was passed, but that is an error if the environment variable is present. Go figure.
I have the shorter code if anyone is interested.
When banging head too hard, step away from the wall and find another approach. I'll call this one closed.
The installer for the python package tables (aka PyTables) finds the hdf5 libraries either with a build switch supplied to pip or by setting an environment variable before starting the build. It was easier/more reliable in the bash script to simply create the environment variable immediately before starting to build tables. There is no need to put it in bash_profile. The environment variable only needs to exist in the shell where tables is built. Once tables is installed, there is no further need for the environment variable.
In this code fragment, pkglist is an array of modules to be installed and InstallPackages is a function that walks the array and calls the appropriate installer: pip in this case.
if [[ "$hdf5exists" == "True" ]]; then
# assert hdf libraries installed
export HDF5_DIR=/usr/local/hdf5
pkglist=("${hdf5_pkglist[#]}") # only install Python wrappers for hdf5
InstallPackages
fi
So, this is the solution I adopted.
I've usually installed Python packages through pip.
For Google App Engine, I need to install packages to another target directory.
I've tried:
pip install -I flask-restful --target ./lib
but it fails with:
must supply either home or prefix/exec-prefix -- not both
How can I get this to work?
Are you using OS X and Homebrew? The Homebrew Python page https://github.com/Homebrew/brew/blob/master/docs/Homebrew-and-Python.md calls out a known issue with pip and a workaround.
Worked for me.
You can make this "empty prefix" the default by adding a
~/.pydistutils.cfg file with the following contents:
[install]
prefix=
Edit: The Homebrew page was later changed to recommend passing --prefix on the command line, as discussed in the comments below. Here is the last version which contained that text. Unfortunately this only works for sdists, not wheels.
The issue was reported to pip, which later fixed it for --user. That's probably why the section has now been removed from the Homebrew page. However, the problem still occurs when using --target as in the question above.
I believe there is a simpler solution to this problem (Homebrew's Python on macOS) that won't break your normal pip operations.
All you have to do is to create a setup.cfg file at the root directory of your project, usually where your main __init__.py or executable py file is. So if the root folder of your project is: /path/to/my/project/, create a setup.cfg file in there and put the magic words inside:
[install]
prefix=
OK, now you should be able to run pip's commands for that folder:
pip install package -t /path/to/my/project/
This command will run gracefully for that folder only. Just copy setup.cfg to whatever other projects you might have. No need to write a .pydistutils.cfg on your home directory.
After you are done installing the modules, you may remove setup.cfg.
On OSX(mac), assuming a project folder called /var/myproject
cd /var/myproject
Create a file called setup.cfg and add
[install]
prefix=
Run pip install <packagename> -t .
Another solution* for Homebrew users is simply to use a virtualenv.
Of course, that may remove the need for the target directory anyway - but even if it doesn't, I've found --target works by default (as in, without creating/modifying a config file) when in a virtual environment.
*I say solution; perhaps it's just another motivation to meticulously use venvs...
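A minimal sequence along those lines (the package name is from the question; virtualenv assumed to be installed):
python -m virtualenv venv          # or: python3 -m venv venv
source venv/bin/activate
pip install flask-restful --target ./lib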
I hit errors with the other recommendations around --install-option="--prefix=lib". The only thing I found that worked is using PYTHONUSERBASE as described here.
export PYTHONUSERBASE=lib
pip install -I flask-restful --user
This is not exactly the same as --target, but it does the trick for me in any case.
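Note that with PYTHONUSERBASE the packages land in a platform-dependent subtree of lib/, not in lib/ itself; you can ask Python where (a quick check):
export PYTHONUSERBASE=lib
python -m site --user-site   # prints the directory that pip --user installs into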
As others have mentioned, this is a known bug with pip and Python installed via Homebrew.
If you create a ~/.pydistutils.cfg file with the "empty prefix" instruction, it will fix this problem, but it will break normal pip operations.
Until this bug is officially addressed, one of the options would be to create your own bash script that would handle this case:
#!/bin/bash
name=''
target=''
while getopts 'n:t:' flag; do
case "${flag}" in
n) name="${OPTARG}" ;;
t) target="${OPTARG}" ;;
esac
done
if [ -z "$target" ];
then
echo "Target parameter must be provided"
exit 1
fi
if [ -z "$name" ];
then
echo "Name parameter must be provided"
exit 1
fi
# current workaround for homebrew bug
file=$HOME'/.pydistutils.cfg'
touch $file
/bin/cat <<EOM >$file
[install]
prefix=
EOM
# end of current workaround for homebrew bug
pip install -I $name --target $target
# current workaround for homebrew bug
rm -rf $file
# end of current workaround for homebrew bug
This script wraps your command and:
accepts name and target parameters
checks if those parameters are empty
creates ~/.pydistutils.cfg file with "empty prefix" instruction in it
executes your pip command with provided parameters
removes ~/.pydistutils.cfg file
This script can be changed and adapted to address your needs, but you get the idea. And it allows you to run your command without breaking pip. Hope it helps :)
If you're using virtualenv*, it might be a good idea to double check which pip you're using.
If you see something like /usr/local/bin/pip you've broken out of your environment. Reactivating your virtualenv will fix this:
VirtualEnv: $ source bin/activate
VirtualFish: $ vf activate [environ]
*: I use virtualfish, but I assume this tip is relevant to both.
I had a similar issue.
I use the --system flag to avoid the error, as I describe here in another thread where I explain the specifics of my situation.
I post this here hoping it can help anyone facing the same problem.