Auto-activate conda env when changing directory - python

I have a few conda environments that I use in different projects, say:
ml37 (for machine learning)
etl37 (for data pipelines)
My local projects are organized in their own directories:
apps/some_app
apps/other_app
...
Each time I cd into a specific project, I already know which env I want to use, so I end up running conda activate [some env] every time I change directories. I feel like there must be a better way.
What would be a clean way to automate this?
Or is my use of conda environments wrong?

I made a script similar to Corey Schafer's, but this one extends the cd command.
cd() { builtin cd "$@" &&
    # entering a directory that carries a .conda_config file?
    if [ -f "$PWD/.conda_config" ]; then
        export CONDACONFIGDIR="$PWD"
        conda activate "$(cat .conda_config)"
    # otherwise: were we inside a configured tree before?
    elif [ "$CONDACONFIGDIR" ]; then
        # ...and did we just leave it?
        if [[ $PWD != *"$CONDACONFIGDIR"* ]]; then
            export CONDACONFIGDIR=""
            conda deactivate
        fi
    fi }
Put these few lines at the bottom of your shell profile, then create a .conda_config file inside each directory you want to activate an env for.
The .conda_config file must contain only the env name.
This way, every time you cd into a directory that has a .conda_config file, the script activates the env, and every time you cd out of it, the script deactivates.
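For example, a minimal walkthrough using one of the envs from the question (paths are illustrative):
mkdir -p ~/apps/some_app
echo "ml37" > ~/apps/some_app/.conda_config   # the env name, nothing else
cd ~/apps/some_app                            # wrapper runs: conda activate ml37
cd ~                                          # left the tree: conda deactivate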
I created a repo for reference: Conda-autoactivate-env
EDIT:
There was a bug in the elif condition.
Basically, [ -n $CONDACONFIGDIR ] always returns true, so the logic was effectively backwards: with the variable unquoted and empty, the test collapses to [ -n ], a one-argument test that always succeeds.
The fix is to quote the variable (or use double square brackets) and drop -n:
[ "$CONDACONFIGDIR" ] or [[ $CONDACONFIGDIR ]]
The above code is already up-to-date!
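To see the pitfall in isolation (safe to paste into any bash session):
unset CONDACONFIGDIR
[ -n $CONDACONFIGDIR ] && echo yes    # prints "yes": expands to [ -n ], which is always true
[ -n "$CONDACONFIGDIR" ] && echo yes  # prints nothing: the empty string fails -n
[ "$CONDACONFIGDIR" ] && echo yes     # prints nothing: the shorter form used in the fix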

Python PATH is taken over by anaconda even though I've modified .bash_profile [duplicate]

I recently installed anaconda2 on my Mac. By default Conda is configured to activate the base environment when I open a fresh terminal session.
I want access to the Conda commands (i.e., I want the path to Conda added to my $PATH, which Conda does when initialised, so that's fine).
However I don't ordinarily program in python, and I don't want Conda to activate the base environment by default.
When first executing conda init from the prompt, Conda adds the following to my .bash_profile:
# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('/Users/geoff/anaconda2/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
else
    if [ -f "/Users/geoff/anaconda2/etc/profile.d/conda.sh" ]; then
        . "/Users/geoff/anaconda2/etc/profile.d/conda.sh"
    else
        export PATH="/Users/geoff/anaconda2/bin:$PATH"
    fi
fi
unset __conda_setup
# <<< conda initialize <<<
If I comment out the whole block, then I can't activate any Conda environments.
I tried to comment out the whole block except for
export PATH="/Users/geoff/anaconda2/bin:$PATH"
But then when I started a new session and tried to activate an environment, I got this error message:
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
This question (and others like it) is helpful, but doesn't ultimately answer my question, and is more suited to Linux users.
To be clear, I'm not asking how to remove the (base) from my $PS1; I'm asking for Conda not to activate base when I open a terminal session.
I have conda 4.6 with a similar block of code that was added by conda. In my case, there's a conda configuration setting to disable the automatic base activation:
conda config --set auto_activate_base false
The first time you run it, it'll create a .condarc in your home directory with that setting to override the default.
This wouldn't de-clutter your .bash_profile, but it's a cleaner solution that avoids manually editing the section that conda manages.
There are three ways to achieve this after conda 4.6. (The last method has the highest priority.)
1. Use the sub-command conda config to change the setting:
conda config --set auto_activate_base false
2. The conda config sub-command above just edits the configuration file .condarc, so you can also modify .condarc directly. Add the following content to the .condarc under your home directory:
# auto_activate_base (bool)
#   Automatically activate the base environment during shell
#   initialization. For `conda init`
auto_activate_base: false
3. Set the environment variable CONDA_AUTO_ACTIVATE_BASE in the shell's init file (.bashrc for bash, .zshrc for zsh):
export CONDA_AUTO_ACTIVATE_BASE=false
To convert from the condarc file-based configuration parameter name to the environment variable parameter name, make the name all uppercase and prepend CONDA_. For example, conda’s always_yes configuration parameter can be specified using a CONDA_ALWAYS_YES environment variable.
The environment settings take precedence over corresponding settings in .condarc file.
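As a quick check of that precedence (a sketch: assumes conda config --show reports the effective, merged value):
conda config --set auto_activate_base true    # .condarc now says: true
export CONDA_AUTO_ACTIVATE_BASE=false
conda config --show auto_activate_base        # reports False: the environment variable wins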
References
The Conda Configuration Engine for Power Users
Using the .condarc conda configuration file
conda config --describe
Conda 4.6 Release
The answer depends a little bit on the version of conda that you have installed. For versions of conda >= 4.4, it should be enough to deactivate the conda environment after the initialization, so add
conda deactivate
right underneath
# <<< conda initialize <<<
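The tail of your .bash_profile then looks like this (everything inside the managed block stays exactly as conda init wrote it; only the last line is added):
# >>> conda initialize >>>
# ... managed contents unchanged ...
# <<< conda initialize <<<
conda deactivate    # undo the automatic activation of base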
To disable auto activation of conda base environment in terminal:
conda config --set auto_activate_base false
To activate conda base environment:
conda activate
So in the end I found that if I commented out the Conda initialisation block like so:
# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
# __conda_setup="$('/Users/geoff/anaconda2/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
# if [ $? -eq 0 ]; then
#     eval "$__conda_setup"
# else
if [ -f "/Users/geoff/anaconda2/etc/profile.d/conda.sh" ]; then
    . "/Users/geoff/anaconda2/etc/profile.d/conda.sh"
else
    export PATH="/Users/geoff/anaconda2/bin:$PATH"
fi
# fi
# unset __conda_setup
# <<< conda initialize <<<
It works exactly how I want. That is, Conda is available to activate an environment if I want, but doesn't activate by default.
If you manage your .bashrc manually and like to keep it simple, all you really need is:
. "$HOME/anaconda2/etc/profile.d/conda.sh"
See Recommended change to enable conda in your shell.
This will make the conda command available without activating the base environment (nor reading your conda config).
Note that this is (of course) not compatible with managing the conda installation with conda init, but other than that, nothing bad comes of it. You may even see a significant speedup compared to the conda init generated code, because this solution avoids calling conda to parse your config files to decide whether to enable the base environment, etc.
It's best to also keep the if/fi lines to avoid error messages if using the same bashrc on several systems where conda may not be installed:
if [ -f "$HOME/anaconda2/etc/profile.d/conda.sh" ]; then
. "$HOME/anaconda2/etc/profile.d/conda.sh"
fi
Finally, if you share your bashrc between several systems where conda may be installed in different paths, you could do as follows:
for CONDA_PREFIX in \
    "$HOME/anaconda2" \
    "$HOME/miniconda3" \
    "/opt/miniconda3"
do
    if [ -f "$CONDA_PREFIX/etc/profile.d/conda.sh" ]; then
        . "$CONDA_PREFIX/etc/profile.d/conda.sh"
        break
    fi
done
Of course, this is now similar in length compared to the conda init generated code, but will still execute much faster, and will likely work better than conda init for users who synchronize their .bashrc between different systems.
For conda 4.12.0 (under Windows), the following worked, where all the previous answers (these included) didn't do the trick:
In your activate.bat file (mine was at ~/miniconda3/Scripts/activate.bat), change the line:
@REM This may work if there are spaces in anything in %*
@CALL "%~dp0..\condabin\conda.bat" activate %*
into
@REM This may work if there are spaces in anything in %*
@CALL "%~dp0..\condabin\conda.bat" deactivate
This change doesn't work in this other section (of the activate.bat file):
@if "%_args1_first%"=="+" if NOT "%_args1_last%"=="+" (
    @CALL "%~dp0..\condabin\conda.bat" activate
    @GOTO :End
)
maybe because it depends on how your miniconda3 (Anaconda Prompt) executable is set up: %windir%\System32\cmd.exe "/K" some-path-to\miniconda3\Scripts\activate.bat some-path-to\miniconda3 (in my case).
Caveat: updating conda overwrites this activate.bat file, so the line has to be modified again after every update. Not much of a deal-breaker if you ask me.
This might be a bug in recent Anaconda versions. What works for me:
Step 1: vim /anaconda/bin/activate, which shows:
#!/bin/sh
_CONDA_ROOT="/anaconda"
# Copyright (C) 2012 Anaconda, Inc
# SPDX-License-Identifier: BSD-3-Clause
\. "$_CONDA_ROOT/etc/profile.d/conda.sh" || return $?
conda activate "$@"
Step 2: comment out the last line: # conda activate "$@"
One thing that hasn't been pointed out is that there is little to no difference between not having an active environment and activating the base environment, if you just want to run applications from Conda's (Python's) scripts directory (as @DryLabRebel wants).
You can install and uninstall via conda, and conda shows the base environment as active - which essentially it is:
> echo $Env:CONDA_DEFAULT_ENV
> conda env list
# conda environments:
#
base * F:\scoop\apps\miniconda3\current
> conda activate
> echo $Env:CONDA_DEFAULT_ENV
base
> conda env list
# conda environments:
#
base * F:\scoop\apps\miniconda3\current

Sourcing .bashrc in Docker via ENTRYPOINT

I have created an image with a bash script called by ENTRYPOINT that itself launches an executable from a conda environment. I'm building this from a single layer directly (for now) which I realize is not best practice, but let's ignore that for a hot second...
Dockerfile
FROM alexholehouse/seq_demo:demo_early
SHELL ["/bin/bash", "-c"]
ENTRYPOINT ["/seq_demo/launcher/launcher.sh"]
where launcher.sh is:
#!/bin/bash
# source bashrc, which includes the conda init section (and works fine in an interactive terminal)
source /root/.bashrc
# activate the conda environment
conda activate custom_conda
if [ -d /mount ]
then
    cd /mount
    # run the executable from the conda environment
    demo_seq -k KEYFILE.kf
else
    echo "No storage mounted..."
fi
Now the problem is that when I build the image using the above Dockerfile, the .bashrc file doesn't get sourced because of the following (standard) line at the top of .bashrc.
[ -z "$PS1" ] && return
... <bashrc stuff>
__conda_setup="$('/root/miniconda3/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
else
    if [ -f "/root/miniconda3/etc/profile.d/conda.sh" ]; then
        . "/root/miniconda3/etc/profile.d/conda.sh"
    else
        export PATH="/root/miniconda3/bin:$PATH"
    fi
fi
unset __conda_setup
So running the image using
docker run -it -v $(pwd):/mount b29c47a060
means .bashrc is not sourced, and launcher.sh fails because conda can't be found.
If, on the other hand, I edit .bashrc so all the conda stuff is above the [ -z "$PS1" ] && return line then (a) conda gets sourced and (b) the rest of the .bashrc is read too.
Now, editing .bashrc solves my issue but this cannot be the right way to do this! So, what's the correct way to set up an image/Dockerfile so:
(a) A specific bash script gets run upon running the container and
(b) That bash script sources the .bashrc
I feel like I'm just missing something super obvious here...
$PS1 contains your command prompt (e.g. '#: ').
Example of changing prompt
So you have to figure out why PS1 isn't set in the first place, because [ -z "$PS1" ] && return will exit the script only when $PS1 is not set at all.
If the base image you're using doesn't provide one, you might just add one in your Dockerfile via ENV PS1.
In case you never log into your container to use the command line, you might just drop that check.
See here for more information about how PS1 is propagated in bash.
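To check that behaviour before touching the image, here's a quick sketch in plain bash (the PS1 value is arbitrary):
unset PS1
bash -c '[ -z "$PS1" ] && echo "guard fires: .bashrc would return early"'
PS1='docker> ' bash -c '[ -z "$PS1" ] || echo "guard passes: .bashrc continues"'
# so ENV PS1="docker> " in the Dockerfile (or docker run -e PS1=...) defeats the early return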

Pipenv lock: how to cache downloads for transfer to an offline machine

I am looking for a way to create a self-contained archive of all dependencies required to satisfy a Pipfile.lock. One way to achieve this would be to point PIPENV_CACHE_DIR at an empty temporary directory, run pipenv install, ship the contents of that directory, and use it on the offline machine.
E.g., this should work:
tmpdir=$(mktemp -d)
if [ -n "$offline" ]; then
    tar -xf pipenv_cache.tar -C "$tmpdir"
fi
pipenv --rm
PIPENV_CACHE_DIR="$tmpdir" PIP_CACHE_DIR="$tmpdir" pipenv install
if [ -n "$online" ]; then
    tar -cf pipenv_cache.tar -C "$tmpdir" .
fi
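Invocation would then look something like this (a sketch: assumes the script above is saved as pipenv_cache.sh and driven by the $online/$offline variables):
# on the connected machine: fill the cache and build the archive
online=1 ./pipenv_cache.sh
# ship the project plus pipenv_cache.tar, then on the offline machine:
offline=1 ./pipenv_cache.sh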
However, there are a number of problems with this script, one being that it can’t use the online machine’s cache, having to download everything every time instead.
The question is, is there a better way, that doesn’t involve a custom script? Maybe some documented community best practices?
Ideally, there would exist an interface like:
pipenv lock --create-archive <file_name>
pipenv install --from-archive <file_name>
With some shell scripting work, wheelfreeze can be made to do it.
To create the archive (in a Bash shell):
(. "$(pipenv --venv)"/bin/activate && wheelfreeze <(pipenv lock -r))
And to install from the archive:
wheelfreeze/install "$(pipenv --venv)"
Disclosure: I created wheelfreeze while trying to solve the problem – “to scratch my own itch”, as the saying goes.

deactivate conflict in virtualenvwrapper and anaconda

I'm using virtualenv to switch my Python dev envs. But when I run workon my_env, I get this error message:
Error: deactivate must be sourced. Run 'source deactivate'
instead of 'deactivate'.
Usage: source deactivate
removes the 'bin' directory of the environment activated with 'source
activate' from PATH.
After some searching on Google, it seems that workon, which is defined in /usr/local/bin/virtualenvwrapper.sh, calls deactivate. A script with the same name is present in Anaconda's bin directory, so it gets called by workon by mistake.
Any suggestion for working around this conflict?
One solution which works for me is to rename deactivate in Anaconda's bin:
mv deactivate conda-deactivate
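Spelled out with full paths (assuming Anaconda lives at ~/anaconda; adjust to your install prefix):
mv ~/anaconda/bin/deactivate ~/anaconda/bin/conda-deactivate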
I agree with the comment by @FredrikHedman that renaming scripts in the anaconda/miniconda bin directory has the potential to be fragile. His full post led me to what I feel is a more robust answer. (Thanks!)
Rather than simply throwing away any errors thrown by calling deactivate, we can condition that call on whether deactivate would invoke the shell function rather than the file. As mentioned, virtualenv and virtualenvwrapper create a function named deactivate; the *condas install a script file of the same name.
So, in the virtualenvwrapper.sh script, we can change the following two lines, which only test whether deactivate is callable at all:
type deactivate >/dev/null 2>&1
if [ $? -eq 0 ]
to the stricter test for whether it's a shell function:
if [ -n "$ZSH_VERSION" ] ; then
    nametype="$(type -w deactivate)"
else
    nametype="$(type -t deactivate)"
fi
if [ "${nametype##* }" == "function" ]
This change avoids triggering the spurious error noted in the original question, but doesn't risk redirecting other useful errors or output into silent oblivion.
Note the variable substitution on nametype in the comparison. This is because the output of type -w under zsh returns something like "name: type" as opposed to type -t under bash which returns simply "type". The substitution deletes everything up to the last space character, if any spaces exist, leaving only the type value. This harmlessly does nothing in bash.
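To see the substitution in isolation (the sample values mirror what the two shells return):
nametype="deactivate: function"    # zsh: output of type -w deactivate
echo "${nametype##* }"             # prints "function" (deletes through the last space)
nametype="function"                # bash: output of type -t deactivate
echo "${nametype##* }"             # prints "function" (no space, string unchanged)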
(Thanks to @toprak for the zsh test and for the correct flag, type -w, under zsh. I look forward to more cross-shell coding tips!)
As ever, I appreciate constructive feedback and comments!
You can edit /usr/local/bin/virtualenvwrapper.sh to make deactivate point to an absolute path to whatever deactivate it is supposed to be referencing.
As I do not have enough reputation to add a comment:
Thomas Capote's suggestion is fine (thanks for that), except "zsh" does not have a "-t" option for the built-in command "type". Therefore it's necessary to add another conditional statement to get the desired result for "nametype":
# Anaconda workaround for "source deactivate" message:
# Start of workaround:
#type deactivate >/dev/null 2>&1
#if [ $? -eq 0 ]
if [ -n "$ZSH_VERSION" ] ; then
    nametype="$(type -w deactivate)"
else
    nametype="$(type -t deactivate)"
fi
if [ "${nametype##* }" == "function" ]
# End of workaround
Hope it helps other zsh users.
In Anaconda, activate is an executable script located in the anaconda bin directory, but it is a function in virtualenvwrapper.sh. So this is sort of a namespace collision problem, but also a case of overlap in functionality.
Anaconda is a Python distribution and, among many other things, has support for dealing with virtual environments via conda env, while virtualenvwrapper is focused on working with different virtual environments. Just renaming the anaconda/bin/activate script is a brittle solution and may break conda.
The code of virtualenvwrapper.sh (the workon function) executes deactivate, which happens to use the anaconda script. This script returns an error. The workon code then goes on, removes the deactivate name, sources its own deactivate, and also creates a deactivate function on the fly.
In summary, it does the right thing, and the "error" can be viewed more as a warning. If you want to make it go away, you can modify the workon function (search for the line # Deactivate any current environment "destructively") and change
deactivate
-->
deactivate >/dev/null 2>&1
(I have suggested this change to the virtualenvwrapper maintainer)

Sharing Python virtualenv environments

I have a Python virtualenv (created with virtualenvwrapper) in one user account. I would like to use it from another user account on the same host.
How can I do this?
How can I set up virtual environments so as to be available to any user on the host? (Primarily Linux / Debian but also Mac OSX.)
Thanks.
Put it in a user-neutral directory, and make it group-readable.
For instance, for libraries, I use /srv/http/share/ for sharing code across web applications.
You could use /usr/local/share/ for normal applications.
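A minimal sketch of that layout (directory and env names are illustrative):
sudo mkdir -p /usr/local/share/venvs
sudo virtualenv /usr/local/share/venvs/shared-env
sudo chmod -R a+rX /usr/local/share/venvs          # readable and traversable by everyone
. /usr/local/share/venvs/shared-env/bin/activate   # any user can now activate it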
I had to do this for workmates. The @Flavius answer worked great once I added a few commands to handle virtualenvwrapper. You need to put your venvs and your WORKON projects folder somewhere you and your boss/friend can find and use them.
sudo mkdir -p /usr/local/share
sudo mv ~/.virtualenvs /usr/local/share
sudo mkdir -p /usr/src/venv/
Assuming you want everyone on the machine to be able to both mkproject and workon:
chmod a+rwx /usr/local/share/.virtualenvs
chmod a+rwx /usr/src/venv
Otherwise chown and chmod to match your security requirements.
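For example, a group-based variant instead of world-writable directories (the group name is hypothetical):
sudo groupadd venv-users
sudo usermod -aG venv-users your_friend
sudo chown -R :venv-users /usr/local/share/.virtualenvs /usr/src/venv
sudo chmod -R g+rwX /usr/local/share/.virtualenvs /usr/src/venv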
If you have any hooks or scripts that expect ~/.virtualenvs to be in the normal place, you better symlink it (on both your user account and your friend's)
ln -s /usr/local/share/.virtualenvs ~/.virtualenvs
Then modify your (and your friend's) .bashrc file to let virtualenvwrapper know where you moved things. Your bashrc should have something like this:
export PROJECT_HOME="/usr/src/venv/"
export WORKON_HOME="/usr/local/share/.virtualenvs"
export USR_BIN=$(dirname "$(which virtualenv)")
if [ -f "$USR_BIN/virtualenvwrapper.sh" ]; then
    source "$USR_BIN/virtualenvwrapper.sh"
else
    if [ -f /usr/local/bin/virtualenvwrapper.sh ]; then
        source /usr/local/bin/virtualenvwrapper.sh
    else
        echo "Can't find a virtualenvwrapper installation"
    fi
fi
Once you log out and back in (or just source ~/.bashrc), you should be good to go with commands like mkproject awesome_new_python_project and workon awesome_new_python_project.
As a bonus, add hooks to load the project folder in Sublime every time you workon, as sketched below.
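That could look something like this (a sketch: assumes Sublime's subl command-line helper is on PATH, and uses $WORKON_HOME/postactivate, virtualenvwrapper's global activation hook):
cat >> "$WORKON_HOME/postactivate" <<'EOF'
# open the project folder matching the env name in Sublime Text
subl "$PROJECT_HOME/$(basename "$VIRTUAL_ENV")"
EOF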
