Whenever I launch iTerm2 with zsh I get the following message:
WARNING: python binary not found on PATH.
zsh-autoswitch-virtualenv plugin will be disabled.
If I check my PATH I get the following:
~ ❯ echo $PATH
/Users/macm1/opt/anaconda3/bin:/Users/macm1/opt/anaconda3/condabin:/usr/local/bin:/Users/macm1/Library/Python/3.8/bin:/Users/macm1/.pyenv/bin:/Users/macm1/.pyenv/bin:/opt/local/bin:/opt/local/sbin:/opt/homebrew/bin:/opt/homebrew/sbin:/Library/Frameworks/Python.framework/Versions/3.10/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:~/.dotnet/tools:/Library/Apple/usr/bin:/Library/Frameworks/Mono.framework/Versions/Current/Commands:/Users/macm1/.fig/bin:/Users/macm1/.local/bin:/Users/macm1/.local/bin:/Users/macm1/.local/bin
Also:
~ ❯ which python3
/Users/macm1/opt/anaconda3/bin/python3
~ ❯ python --version
Python 3.9.7
~ ❯ ls -l /usr/bin/python
"/usr/bin/python": No such file or directory (os error 2)
In my zsh config I have the following:
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
#export PATH=/Library/Frameworks/Python.framework/Versions/3.8/bin:$PATH
export PATH=/Users/macm1/Library/Python/3.8/bin:$PATH
# Created by `pipx`
export PATH="$PATH:/Users/macm1/.local/bin"
# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('/Users/macm1/opt/anaconda3/bin/conda' 'shell.zsh' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
else
    if [ -f "/Users/macm1/opt/anaconda3/etc/profile.d/conda.sh" ]; then
        . "/Users/macm1/opt/anaconda3/etc/profile.d/conda.sh"
    else
        export PATH="/Users/macm1/opt/anaconda3/bin:$PATH"
    fi
fi
unset __conda_setup
Can anyone assist with this?
The problem is that the zsh-autoswitch-virtualenv plugin is checking for a python binary on your system,
but you may only have python3 installed.
To fix this, open your ~/.zshrc (or ~/.bashrc) file with vim or nano
and add these lines at the top:
alias python=python3
alias pip=pip3
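For example, after reloading your config with source ~/.zshrc, a quick check like this confirms the alias is in place (output format is zsh's; the version shown is the one from the question and may differ on your machine):
~ ❯ type python
python is an alias for python3
~ ❯ python --version
Python 3.9.7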
On my Linux server, my Python version is 3.6.5.
I run my Python files by typing python3 [.py file] in the terminal to use Python 3.6.5.
However, after I installed Anaconda3 on the server, the python3 version changed to 3.8.8 (python3 --version now shows Python 3.8.8).
So I guess Anaconda modified ~/.bashrc to change my Python version (actually I am not sure which file Anaconda modified).
I am trying to add
export PYTHONPATH=$PYTHONPATH:/usr/lib/python3.6/site-packages
to change the python3 version back to 3.6.5,
but it didn't work (python3 --version still shows Python 3.8.8).
I would like to know which Python 3 version the system chooses when I type python3 in the terminal.
How can I change python3 back to 3.6.5 (so that python3 --version shows Python 3.6.5)?
my ~/.bashrc:
export PATH=$HOME/bin:/bin:/usr/bin:/usr/local/bin:/opt/bin
export MAIL=/var/spool/mail/$USER
if [ $(uname -s) = 'SunOS' ]; then
    export PYTHONPATH=$PYTHONPATH:/usr/lib/python3.6/site-packages
    export PATH=$PATH:/usr/ucb:/usr/ccs/bin:/usr/local/workshop/bin
    export PATH=$PATH:/usr/X11R6/bin:/usr/X11R5/bin:/usr/openwin/bin
    export MANPATH=/usr/man:/usr/local/man:/usr/X11R6/man:/usr/X11R5/man:/usr/motif1.2/man:/usr/share/catman:/opt/SUNWspro/man
    # for CXterm
    export HZINPUTDIR=/usr/X11R6/lib/X11/cxterm.dic
    export HBFPATH=/usr/local/chinese/fonts/cnprint:/usr/X11R6/lib/X11/fonts/chpower
    alias b5hztty='hztty -O hz2gb:gb2big -I big2gb:gb2hz'
fi
export PS1='\h:\w> '
alias ls='ls -aF'
alias cp='cp -i'
alias mv='mv -i'
alias rm='rm -i'
ulimit -c 0
umask 077
#Cache Server
...
##
## put command run after interactive login in ~/.bash_profile
##
# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('/research/dept8/msc/xcxia21/anaconda3/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
else
    if [ -f "/research/dept8/msc/xcxia21/anaconda3/etc/profile.d/conda.sh" ]; then
        . "/research/dept8/msc/xcxia21/anaconda3/etc/profile.d/conda.sh"
    else
        export PATH="/research/dept8/msc/xcxia21/anaconda3/bin:$PATH"
    fi
fi
unset __conda_setup
You can create an alias so that python3 points to your Python 3.6 interpreter:
alias python3='/path/to/python3.6'
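For example, assuming the old interpreter lives at /usr/bin/python3.6 (this path is an assumption; check with which -a python3.6 or ls /usr/bin/python3*), you could append the alias to ~/.bashrc and reload it:
echo "alias python3='/usr/bin/python3.6'" >> ~/.bashrc
source ~/.bashrc
python3 --version   # should now report Python 3.6.5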
I used anaconda3 for a python3 install. And now it's my default python:
$ which python
/home/xy/anaconda3/bin/python
$ which python3
/home/xy/anaconda3/bin/python
But I need python2 as my default python.
$ which python2
/usr/bin/python2
I tried to edit my .bashrc shown below,
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('/home/xy/anaconda3/bin/conda' 'shell.bash' 'hook' 2> /dev/nu
ll)"
if [ $? -eq 0 ]; then
eval "$__conda_setup"
else
if [ -f "/home/xy/anaconda3/etc/profile.d/conda.sh" ]; then
. "/home/xy/anaconda3/etc/profile.d/conda.sh"
else
export PATH="/home/xy/anaconda3/bin:$PATH"
fi
fi
unset __conda_setup
by changing the export PATH line above to
export PATH="$PATH:/home/xy/anaconda3/bin"
It didn't change anything.
How should I set python2 back as the default?
I think the cleanest way forward is to make the following changes:
1) Edit your ~/.bashrc and make the following modifications.
Keep this block. Do not edit it. If you already deleted it, you can recreate it by typing conda init bash.
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('/home/xy/anaconda3/bin/conda' 'shell.bash' 'hook' 2> /dev/nu
ll)"
if [ $? -eq 0 ]; then
eval "$__conda_setup"
else
if [ -f "/home/xy/anaconda3/etc/profile.d/conda.sh" ]; then
. "/home/xy/anaconda3/etc/profile.d/conda.sh"
else
export PATH="/home/xy/anaconda3/bin:$PATH"
fi
fi
unset __conda_setup
2) Make sure /home/xy/anaconda3/bin is not added to PATH anywhere outside of this block. If it is, delete those lines.
3) Call conda config --set auto_activate_base False in your shell
From now on, you have to activate the anaconda environment manually with conda activate base. If you do not call this, you will default back to your system python.
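Put together, the workflow would look roughly like this (conda also accepts lowercase false for this setting):
conda config --set auto_activate_base False
# open a new shell: python/python3 now resolve to the system interpreter again
conda activate base      # opt back in to Anaconda's Python when you need it
conda deactivate         # and drop back to the system Python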
I have created an image with a bash script called by ENTRYPOINT that itself launches an executable from a conda environment. I'm building this from a single layer directly (for now) which I realize is not best practice, but let's ignore that for a hot second...
Dockerfile
FROM alexholehouse/seq_demo:demo_early
SHELL ["/bin/bash", "-c"]
ENTRYPOINT ["/seq_demo/launcher/launcher.sh"]
Where launcher.sh is
#!/bin/bash
# source bashrc which includes conda init section (and works fine in an interactive terminal)
source /root/.bashrc
# activate the conda environment
conda activate custom_conda
if [ -d /mount ]
then
    cd /mount
    # run the executable from the conda environment
    demo_seq -k KEYFILE.kf
else
    echo "No storage mounted..."
fi
Now the problem is that when I run a container from the image built with the above Dockerfile, the .bashrc file doesn't get sourced because of the following (standard) line at the top of .bashrc.
[ -z "$PS1" ] && return
... <bashrc stuff>
__conda_setup="$('/root/miniconda3/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
else
    if [ -f "/root/miniconda3/etc/profile.d/conda.sh" ]; then
        . "/root/miniconda3/etc/profile.d/conda.sh"
    else
        export PATH="/root/miniconda3/bin:$PATH"
    fi
fi
unset __conda_setup
So running the image using
docker run -it -v $(pwd):/mount b29c47a060
means that .bashrc is not sourced and launcher.sh fails because conda can't be found.
If, on the other hand, I edit .bashrc so all the conda stuff is above the [ -z "$PS1" ] && return line then (a) conda gets sourced and (b) the rest of the .bashrc is read too.
Now, editing .bashrc solves my issue but this cannot be the right way to do this! So, what's the correct way to set up an image/Dockerfile so:
(a) A specific bash script gets run upon running the container and
(b) That bash script sources the .bashrc
I feel like I'm just missing something super obvious here...
$PS1 contains your command prompt (e.g. '#: ').
Example of changing the prompt
So you have to figure out why PS1 isn't set in the first place, because [ -z "$PS1" ] && return will exit the script only when $PS1 is unset or empty.
If the base image you're using doesn't provide one, you can just set it in your Dockerfile via ENV PS1.
If you never log into your container to use the command line, you could simply drop that check.
See here for more information about how PS1 is propagated in bash.
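A minimal sketch of that suggestion, reusing the Dockerfile from the question (the PS1 value itself is arbitrary):
FROM alexholehouse/seq_demo:demo_early
SHELL ["/bin/bash", "-c"]
# set a PS1 so the early-return guard in .bashrc does not trigger
ENV PS1="docker> "
ENTRYPOINT ["/seq_demo/launcher/launcher.sh"]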
How do you create a Bash script to activate a Python virtualenv?
I have a directory structure like:
.env
    bin
        activate
        ...other virtualenv files...
src
    shell.sh
    ...my code...
I can activate my virtualenv by:
user@localhost:src$ . ../.env/bin/activate
(.env)user@localhost:src$
However, doing the same from a Bash script does nothing:
user@localhost:src$ cat shell.sh
#!/bin/bash
. ../.env/bin/activate
user@localhost:src$ ./shell.sh
user@localhost:src$
What am I doing wrong?
When you source, you're loading the activate script into your active shell.
When you do it in a script, you load it into that shell which exits when your script finishes and you're back to your original, unactivated shell.
Your best option would be to do it in a function
activate () {
    . ../.env/bin/activate
}
or an alias
alias activate=". ../.env/bin/activate"
You should call the bash script using source.
Here is an example:
#!/bin/bash
# Let's call this script venv.sh
source "<absolute_path_recommended_here>/.env/bin/activate"
On your shell just call it like that:
> source venv.sh
Or as @outmind suggested (note that this does not work with zsh):
> . venv.sh
There you go, the virtualenv indicator will be shown in your prompt.
Although it doesn't add the "(.env)" prefix to the shell prompt, I found this script works as expected.
#!/bin/bash
script_dir=$(dirname "$0")
cd "$script_dir"
/bin/bash -c ". ../.env/bin/activate; exec /bin/bash -i"
e.g.
user@localhost:~/src$ which pip
/usr/local/bin/pip
user@localhost:~/src$ which python
/usr/bin/python
user@localhost:~/src$ ./shell
user@localhost:~/src$ which pip
~/.env/bin/pip
user@localhost:~/src$ which python
~/.env/bin/python
user@localhost:~/src$ exit
exit
Sourcing runs shell commands in your current shell. When you source inside of a script like you are doing above, you are affecting the environment for that script, but when the script exits, the environment changes are undone, as they've effectively gone out of scope.
If your intent is to run shell commands in the virtualenv, you can do that in your script after sourcing the activate script. If your intent is to interact with a shell inside the virtualenv, then you can spawn a sub-shell inside your script which would inherit the environment.
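A minimal sketch of both options, assuming the ../.env layout from the question:
#!/bin/bash
# Option 1: run the commands from inside the script after sourcing activate
. ../.env/bin/activate
python --version   # the virtualenv's interpreter
pip list           # and its packages
# Option 2: hand the user an interactive sub-shell that inherits the activated environment
# exec /bin/bash -i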
Here is the script that I use often. Run it as $ source script_name
#!/bin/bash -x
PWD=`pwd`
/usr/local/bin/virtualenv --python=python3 venv
echo $PWD
activate () {
    . $PWD/venv/bin/activate
}
activate
You can also do this using a subshell to better contain your usage - here's a practical example:
#!/bin/bash
commandA --args
# Run commandB in a subshell and collect its output in $VAR
# NOTE
# - PATH is only modified as an example
# - output beyond a single value may not be captured without quoting
# - it is important to discard (or separate) virtualenv activation stdout
# if the stdout of commandB is to be captured
#
VAR=$(
    PATH="/opt/bin/foo:$PATH"
    . /path/to/activate > /dev/null # activate virtualenv
    commandB # tool from /opt/bin/ which requires virtualenv
)
# Use the output from commandB later
commandC "$VAR"
This style is especially helpful when
a different version of commandA or commandC exists under /opt/bin
commandB exists in the system PATH or is very common
these commands fail under the virtualenv
one needs a variety of different virtualenvs
What do you need to source the bash script for?
If you intend to switch between multiple virtualenvs or enter one virtualenv quickly, have you tried virtualenvwrapper? It provides a lot of utilities like workon venv, mkvirtualenv venv, and so on.
If you just want to run a python script in a certain virtualenv, use /path/to/venv/bin/python script.py to run it.
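As for virtualenvwrapper: with it installed and its helper script sourced in your shell, the typical workflow looks like this (the install path below is an assumption and depends on how you installed it):
pip install --user virtualenvwrapper
source ~/.local/bin/virtualenvwrapper.sh   # assumed location of the helper script
mkvirtualenv venv    # create and activate a new environment
workon venv          # re-activate it later
deactivate           # leave it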
As others already stated, what you are doing wrong is not sourcing the script you created. When you run the script just like you showed, it creates a new shell which activates the virtual environment and then exits, so there are no changes to your original shell from which you ran the script.
You need to source the script, which will make it run in your current shell.
You can do that by calling source shell.sh or . shell.sh
To make sure the script is sourced instead of executed normally, it's nice to have some checks in place in the script to remind you. For example, the script I use is this:
#!/bin/bash
if [[ "$0" = "$BASH_SOURCE" ]]; then
    echo "Needs to be run using source: . activate_venv.sh"
else
    VENVPATH="venv/bin/activate"
    if [[ $# -eq 1 ]]; then
        if [ -d "$1" ]; then
            VENVPATH="$1/bin/activate"
        else
            echo "Virtual environment $1 not found"
            return
        fi
    elif [ -d "venv" ]; then
        VENVPATH="venv/bin/activate"
    elif [ -d "env" ]; then
        VENVPATH="env/bin/activate"
    fi
    echo "Activating virtual environment $VENVPATH"
    source "$VENVPATH"
fi
It's not bulletproof but it's easy to understand and does its job.
You should use multiple commands in one line, for example:
os.system(". Projects/virenv/bin/activate && python Projects/virenv/django-project/manage.py runserver")
When you activate the virtual environment in one os.system call, I think the activation is forgotten by the other calls; you can prevent this by chaining the commands in a single line.
It worked for me :)
When I was learning venv I created a script to remind me how to activate it.
#!/bin/sh
# init_venv.sh
if [ -d "./bin" ];then
echo "[info] Ctrl+d to deactivate"
bash -c ". bin/activate; exec /usr/bin/env bash --rcfile <(echo 'PS1=\"(venv)\${PS1}\"') -i"
fi
This has the advantage that it changes the prompt.
As stated in other answers, when you run a script, it creates a sub-shell.
When the script exits, all modifications to that shell are lost.
What we need is actually to run a new shell where the virtual environment is active, and not exit from it.
Be aware, this is a new shell, not the one in use before you run your script.
What this means is that if you type exit in it, it will exit the subshell and return to the previous one (the one where you ran the script); it won't close your xterm or whatever, as you might have expected.
The trouble is, when we exec bash, it reads its rc files (/etc/bash.bashrc, ~/.bashrc), which will change the shell environment. The solution is to provide bash with a way to set up the shell as usual, while additionally activating the virtual environment. To do this, we create a temporary file, recreating the original bash behavior, and adding a few things we need to enable our venv. We then ask bash to use it instead of its usual rc files.
A beneficial side-effect of having a new shell "dedicated" to our venv is that to deactivate the virtual environment, the only thing needed is to exit the shell.
I use this in the script shown below to provide a 'deactivate' option, which works by sending a signal to the new shell (kill -SIGUSR1); this signal is intercepted (trap ...) and triggers the exit from the shell.
Note: I use SIGUSR1 so as not to interfere with whatever could be set up in the "normal" behavior.
The script I use:
#!/bin/bash
PYTHON=python3
myname=$(basename "$0")
mydir=$(cd $(dirname "$0") && pwd)
venv_dir="${mydir}/.venv/dev"
usage() {
    printf "Usage: %s (activate|deactivate)\n" "$myname"
}
[ $# -eq 1 ] || { usage >&2; exit 1; }
in_venv() {
    [ -n "$VIRTUAL_ENV" -a "$VIRTUAL_ENV" = "$venv_dir" -a -n "$VIRTUAL_ENV_SHELL_PID" ]
}
case $1 in
activate)
# check if already active
in_venv && {
printf "Virtual environment already active\n"
exit 0
}
# check if created
[ -e "$venv_dir" ] || {
$PYTHON -m venv --clear --prompt "venv: dev" "$venv_dir" || {
printf "Failed to initialize venv\n" >&2
exit 1
}
}
# activate
tmp_file=$(mktemp)
cat <<EOF >"$tmp_file"
# original bash behavior
if [ -f /etc/bash.bashrc ]; then
source /etc/bash.bashrc
fi
if [ -f ~/.bashrc ]; then
source ~/.bashrc
fi
# activating venv
source "${venv_dir}/bin/activate"
# remove deactivate function:
# we don't want to call it by mistake
# and forget we have an additional shell running
unset -f deactivate
# exit venv shell
venv_deactivate() {
printf "Exitting virtual env shell.\n" >&2
exit 0
}
trap "venv_deactivate" SIGUSR1
VIRTUAL_ENV_SHELL_PID=$$
export VIRTUAL_ENV_SHELL_PID
# remove ourself, don't let temporary files laying around
rm -f "${tmp_file}"
EOF
exec "/bin/bash" --rcfile "$tmp_file" -i || {
printf "Failed to execute virtual environment shell\n" >&2
exit 1
}
;;
deactivate)
# check if active
in_venv || {
printf "Virtual environment not found\n" >&2
exit 1
}
# exit venv shell
kill -SIGUSR1 $VIRTUAL_ENV_SHELL_PID || {
printf "Failed to kill virtual environment shell\n" >&2
exit 1
}
exit 0
;;
*)
usage >&2
exit 1
;;
esac
I simply added this into my .bashrc-personal config file.
function sv () {
    if [ -d "venv" ]; then
        source "venv/bin/activate"
    else
        if [ -d ".venv" ]; then
            source ".venv/bin/activate"
        else
            echo "Error: No virtual environment detected!"
        fi
    fi
}
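Usage is then just (the project path is hypothetical):
cd ~/some-project   # any directory containing venv/ or .venv/
sv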