Set Jupyter Notebook password in bash script through environment variable - python

I have come across the following code in a bash script for automatic deployment and setup of Jupyter Notebook. There are issues when the password contains a #: Python treats the rest of the line as a comment (and then probably raises an exception due to the unclosed bracket):
# Set up Jupyter Notebook with the same password
su - $USERNAME -c "$SHELL +x" << EOF
mkdir -p ~/.jupyter
python3 -c "from notebook.auth.security import set_password; set_password(password=\"$PASSWORD\")"
EOF
Obviously this is not an ideal way to set a password due to the above issues.
I am wondering what best practice would be in this situation? Using triple-quotes instead of single enclosing quotes might be better, but still runs into issues if the supplied password has triple quotes. How is this situation normally tackled?

Instead of expanding the password into the here-document, have Python read it from the environment:
# Set up Jupyter Notebook with the same password
su - $USERNAME -c "$SHELL +x" << EOF
mkdir -p ~/.jupyter
python3 -c "from notebook.auth.security import set_password; import os; set_password(password=os.environ['PASSWORD'])"
EOF
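Because the password now travels only through the environment, neither shell nor Python quoting ever touches it. A quick standalone check (my own sketch, no su involved; PASSWORD as above):
export PASSWORD='p#ss"word'
python3 -c "import os; print(os.environ['PASSWORD'])"
# prints p#ss"word -- the # and the quote survive intact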

You can pass Jupyter command-line options to the notebook, the way it is done for Docker. This method is safe, as it uses the SHA-hashed password. The start-notebook.sh script looks like this (source):
#!/bin/bash
# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.
set -e
wrapper=""
if [[ "${RESTARTABLE}" == "yes" ]]; then
wrapper="run-one-constantly"
fi
if [[ ! -z "${JUPYTERHUB_API_TOKEN}" ]]; then
# launched by JupyterHub, use single-user entrypoint
exec /usr/local/bin/start-singleuser.sh "$@"
elif [[ ! -z "${JUPYTER_ENABLE_LAB}" ]]; then
. /usr/local/bin/start.sh $wrapper jupyter lab "$@"
else
. /usr/local/bin/start.sh $wrapper jupyter notebook "$@"
fi
For example, to secure the Notebook server with a custom password hashed using IPython.lib.passwd() instead of the default token, you can run the following to test via docker:
#generate a password
from IPython.lib.security import passwd
passwd(passphrase=None, algorithm='sha1')
# run test in docker
docker run -d -p 8888:8888 jupyter/base-notebook start-notebook.sh --NotebookApp.password='sha1:31246c00022b:cd4a5e1eac6b621284331642a27b621948d80a92'
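To script this end-to-end, a hedged sketch (assuming IPython is installed on the host; HASH is an arbitrary variable name of mine):
# generate the sha1 hash non-interactively and hand it to the container
HASH=$(python3 -c "from IPython.lib.security import passwd; print(passwd('my-password'))")
docker run -d -p 8888:8888 jupyter/base-notebook start-notebook.sh --NotebookApp.password="$HASH"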

Related

Create an automatic routine to run several Python scripts, using a shell script

I am trying to create a shell script that executes some Python scripts in order.
I have never done something like this and I don't know if what I am doing is correct.
Before executing the last file I need to activate an environment so it has the necessary imports and functions correctly.
This is the script I have created (execute.sh):
#!/bin/sh
python path/file_1.py
python path/file_2.py
python path/file_3.py
# activate a virtual environment
source /path/env/bin/activate
python path/file_4.py
After this, what should I do?
Just write this line in the terminal?
./execute.sh
That should work. Here is what I usually do when running a bash script with Python scripts inside.
#!/bin/bash
. /home/user/.bashrc # Load basic env as we're coming from cron
# Cd into our working directory in case we're not into it already
cd "$(dirname "$0")";
echo "----------------"
echo "Starting processing `date`"
echo "----------------"
conda activate models # Activate my conda environment
# Execute Python script
python script.py
if [[ $? = 0 ]]; then
echo "Succesfull execution!"
else
echo "Something wrong"
exit 1
fi
This is in execute.run, which you then make executable with chmod +x execute.run, and run. It will work wherever you are, because it always tries to cd into the directory where the script resides, so you can also call it from crontab.
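For reference, a crontab line along these lines runs it nightly at 02:00 (the paths are placeholders of mine):
# m h dom mon dow command
0 2 * * * /home/user/jobs/execute.run >> /home/user/jobs/execute.log 2>&1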

Interact with docker container in the middle of a bash script execution [in that container]

I want to start a bunch of docker containers with the help of a Python script, using the subprocess library for that. Essentially, I am trying to run this docker command
docker = f"docker run -it --rm {env_vars} {hashes} {results} {script} {pipeline} --name {project} {CONTAINER_NAME}"
in a new terminal window.
Popen(f'xterm -T {project} -geometry 150x30+100+350 -e {docker}', shell=True)
# or
Popen(f'xfce4-terminal -T {project} --minimize {hold} -e="{docker}"', shell=True)
Container's CMD looks like this. It's a bash script that runs other scripts and the functions in them.
CMD ["bash", "/run_pipeline.sh"]
What I am trying to do is run an interactive shell (bash) from one of these nested scripts at a specific place in case of a failure (i.e. when some condition is met), so that I can investigate the problem in the script, do something to fix it and continue execution (or just exit if I cannot fix it).
if [ $? -ne 0 ]; then
echo Investigate manually: "$REPO_NAME"
bash
if [ $? -ne 0 ]; then exit 33; fi
fi
I want to do this fully automatically so I don't have to manually keep track of what is going on with each script and run docker attach... when needed, because I will be running multiple such containers simultaneously.
The problem is that this "rescue" bash process exits immediately, and I don't know why. I think it has something to do with ttys and such, but a bunch of fiddling around with it brought no success.
I tried different combinations of -i, -t and -d on the docker command, tried using docker attach... right after starting the container with -d, and also tried starting the Python script directly from bash in a terminal (I am using PyCharm by default). Besides that, I tried the socat, screen, script and getty commands (in the nested bash script), but I don't know how to use them properly, so that didn't end well either. At this point I'm too confused to understand why it isn't working.
EDIT:
Adding a minimal example (of what is not working) showing how I am starting a container.
# ./Dockerfile
FROM debian:bookworm-slim
SHELL ["bash", "-c"]
CMD ["bash", "/run_pipeline.sh"]
# run 'docker build -t test .'
# ./small_example.py
from subprocess import Popen
if __name__ == '__main__':
env_vars = f"-e REPO_NAME=test -e PROJECT=test_test"
script = f'-v "$(pwd)"/run_pipeline.sh:/run_pipeline.sh:ro'
docker = f"docker run -it --rm {env_vars} {script} --name test_name test"
# Popen(f'xterm -T test -geometry 150x30+100+350 +hold -e "{docker}"', shell=True).wait()
Popen(f'xfce4-terminal -T test --hold -e="{docker}"', shell=True).wait()
# ./run_pipeline.sh
# do some hard work
ls non/existent/path
if [ $? -ne 0 ]; then
echo Investigate manually: "$REPO_NAME"
bash
if [ $? -ne 0 ]; then exit 33; fi
fi
It seems like the problem may be in the run_pipeline.sh script, but I don't want to upload it here; it's a bigger mess than what I described earlier. I will say anyway that I am trying to run this thing - https://github.com/IBM/D2A.
So I just wanted some advice on the tty stuff that I am probably missing.
Run the initial container detached, with input and a tty.
docker run -dit --rm {env_vars} {script} --name test_name test
Monitor the container logs for the output, then attach to it.
Here is a quick script example (no tty in this case, only because the demo uses echo for input):
#!/bin/bash
docker run --name test_name -id debian \
bash -c 'echo start; sleep 10; echo "reading"; read var; echo "var=$var"'
while ! docker logs test_name | grep reading; do
sleep 3
done
echo "attach input" | docker attach test_name
The complete output after it finishes:
$ docker logs test_name
start
reading
var=attach input
The whole process would be easier to control via the Docker Python SDK rather than having a layer of shell between Python and Docker.
As I said in a comment on Matt's answer, his solution does not work in my situation either. I think it's a problem with the script that I'm running: some of the many shell processes (https://imgur.com/a/JiPYGWd) are taking up the allocated tty, but I don't know for sure.
So I came up with my own workaround. I simply block the execution of the script by creating a named pipe and then reading from it.
if [ $? -ne 0 ]; then
echo Investigate _make_ manually: "$REPO_NAME"
mkfifo "/tmp/mypipe_$githash" && echo "/tmp/mypipe_$githash" && read -r res < "/tmp/mypipe_$githash"
if [ "$res" -ne 0 ]; then exit 33; fi
fi
Then I just launch a terminal emulator and execute docker exec in it to start a new bash process. I do this with the help of the Docker Python SDK, monitoring the output of the container so I know when to launch the terminal.
from subprocess import Popen
import docker

def monitor_container_output(container):
    # accumulate the byte stream into lines and watch for the pipe marker
    line = b''
    for log in container.logs(stream=True):
        if log == b'\n':
            print(line.decode())
            if b'mypipe_' in line:
                # the script is now blocked on the fifo: open a terminal with a shell inside the container
                Popen(f'xfce4-terminal -T {container.name} -e="docker exec -it {container.name} bash"', shell=True).wait()
            line = b''
            continue
        line += log

# IMAGE_NAME, project, env_vars and volumes are defined earlier in my script
client = docker.from_env()
container = client.containers.run(IMAGE_NAME, name=project, detach=True, stdin_open=True, tty=True,
                                  auto_remove=True, environment=env_vars, volumes=volumes)
monitor_container_output(container)
After I finish investigating the problem in that new bash process, I send a "status code of the investigation" to tell the script to continue running or exit.
echo 0 > "/tmp/mypipe_$githash"
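(A side note of mine, not part of the original workaround: it may be worth removing the pipe afterwards, so that a rerun with the same $githash does not fail on mkfifo.)
rm -f "/tmp/mypipe_$githash"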

Sourcing .bashrc in Docker via ENTRYPOINT

I have created an image with a bash script called by ENTRYPOINT that itself launches an executable from a conda environment. I'm building this from a single layer directly (for now) which I realize is not best practice, but let's ignore that for a hot second...
Dockerfile
FROM alexholehouse/seq_demo:demo_early
SHELL ["/bin/bash", "-c"]
ENTRYPOINT ["/seq_demo/launcher/launcher.sh"]
Where launcher.sh is
#!/bin/bash
# source bashrc which includes conda init section (and works fine in an interactive terminal)
source /root/.bashrc
# activate the conda environment
conda activate custom_conda
if [ -d /mount ]
then
cd /mount
# run the executable from the conda environment
demo_seq -k KEYFILE.kf
else
echo "No storage mounted..."
fi
Now the problem is that when I build the image using the above Dockerfile, the .bashrc file doesn't get sourced because of the following (standard) line at the top of .bashrc.
[ -z "$PS1" ] && return
... <bashrc stuff>
__conda_setup="$('/root/miniconda3/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
eval "$__conda_setup"
else
if [ -f "/root/miniconda3/etc/profile.d/conda.sh" ]; then
. "/root/miniconda3/etc/profile.d/conda.sh"
else
export PATH="/root/miniconda3/bin:$PATH"
fi
fi
unset __conda_setup
So running the image using
docker run -it -v $(pwd):/mount b29c47a060
means .bashrc is not sourced and launcher.sh fails because conda can't be found.
If, on the other hand, I edit .bashrc so all the conda stuff is above the [ -z "$PS1" ] && return line then (a) conda gets sourced and (b) the rest of the .bashrc is read too.
Now, editing .bashrc solves my issue but this cannot be the right way to do this! So, what's the correct way to set up an image/Dockerfile so:
(a) A specific bash script gets run upon running the container and
(b) That bash script sources the .bashrc
I feel like I'm just missing something super obvious here...
$PS1 contains your command prompt (e.g. '#: ').
Example of changing the prompt
So you have to figure out why PS1 isn't set in the first place, because [ -z "$PS1" ] && return will exit the script only when $PS1 is not set at all.
If the base image you're using doesn't provide one, you can just add one in your Dockerfile via ENV PS1.
In case you never log into your container to use the command line, you could also just drop that check.
See here for more information about how PS1 is propagated in bash.
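For example, a one-line Dockerfile sketch (the prompt string itself is an arbitrary choice of mine):
ENV PS1="\u@\h:\w\$ "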

Sourcing/Activating Python VirtualEnv in a Bash Script

I am trying to write a simple script that will help me activate (source) the virtualenv and set some environment variables at the same time. Below is the current version that does not yet have any environment variables.
#!/bin/bash
if [ -z "$BASH_VERSION" ]
then
exec bash "$0" "$#"
fi
# Your script here
script_dir=`dirname $0`
cd $script_dir
/bin/bash -c ". ./django-env/bin/activate; exec /bin/bash -i"
The trouble with this script is two-fold.
When I run it - seemingly successfully, as it changes the command-line prefix to (django-env) - the My-Computer-Name: in front of it is missing. Obviously, this is an indication of something, as I typically have (django-env) My-Computer-Name: as the prefix.
It does not activate the virtualenv properly. Namely, when I check which python, I see that the virtualenv's Python is used. On the other hand, when I check which pip or which python3, the global system's Python is used.
What can I do to fix these things and have the environment be activated?
I suspect the problem is with exec /bin/bash -i: the executed bash runs .bash_profile and .bashrc, which can change the current environment.
Instead of a shell script that executes shells upon shells, you'd better create an alias or a shell function:
django_activate() {
cd $1
. ./django-env/bin/activate
}
Put it in .bashrc so it will be available in all shells, and run it as django_activate $venv_dir; for example, django_activate ~/projects/work.
The following code does what I intended it to do. I run it with source script.sh
#!/bin/bash
if [ -z "$BASH_VERSION" ]
then
exec bash "$0" "$#"
fi
# Your script here
script_dir=`dirname $0`
cd $script_dir
/bin/bash -c ". ./django-env/bin/activate"

How to source virtualenv activate in a Bash script

How do you create a Bash script to activate a Python virtualenv?
I have a directory structure like:
.env
bin
activate
...other virtualenv files...
src
shell.sh
...my code...
I can activate my virtualenv by:
user@localhost:src$ . ../.env/bin/activate
(.env)user@localhost:src$
However, doing the same from a Bash script does nothing:
user@localhost:src$ cat shell.sh
#!/bin/bash
. ../.env/bin/activate
user@localhost:src$ ./shell.sh
user@localhost:src$
What am I doing wrong?
When you source, you're loading the activate script into your active shell.
When you do it in a script, you load it into that script's shell, which exits when your script finishes, and you're back to your original, unactivated shell.
Your best option would be to do it in a function
activate () {
. ../.env/bin/activate
}
or an alias
alias activate=". ../.env/bin/activate"
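Either way, reload your shell config and the activation then happens in your current shell, so it persists. For example:
user@localhost:src$ source ~/.bashrc
user@localhost:src$ activate
(.env)user@localhost:src$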
You should call the bash script using source.
Here is an example:
#!/bin/bash
# Let's call this script venv.sh
source "<absolute_path_recommended_here>/.env/bin/activate"
On your shell just call it like that:
> source venv.sh
Or as @outmind suggested: (Note that this does not work with zsh)
> . venv.sh
There you go, the shell indication will be placed on your prompt.
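For example, you should then see something like:
> source venv.sh
(.env) >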
Although it doesn't add the "(.env)" prefix to the shell prompt, I found this script works as expected.
#!/bin/bash
script_dir=`dirname $0`
cd $script_dir
/bin/bash -c ". ../.env/bin/activate; exec /bin/bash -i"
e.g.
user@localhost:~/src$ which pip
/usr/local/bin/pip
user@localhost:~/src$ which python
/usr/bin/python
user@localhost:~/src$ ./shell
user@localhost:~/src$ which pip
~/.env/bin/pip
user@localhost:~/src$ which python
~/.env/bin/python
user@localhost:~/src$ exit
exit
Sourcing runs shell commands in your current shell. When you source inside of a script like you are doing above, you are affecting the environment for that script, but when the script exits, the environment changes are undone, as they've effectively gone out of scope.
If your intent is to run shell commands in the virtualenv, you can do that in your script after sourcing the activate script. If your intent is to interact with a shell inside the virtualenv, then you can spawn a sub-shell inside your script which would inherit the environment.
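For the second case, a minimal sketch of such a script (assuming the same ../.env layout as in the question):
#!/bin/bash
# activate the venv for this script's scope, run anything that needs it...
. ../.env/bin/activate
python --version   # the virtualenv's python
# ...then hand over to an interactive shell that inherits the activated environment
exec bash -i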
Here is the script that I use often. Run it as $ source script_name
#!/bin/bash -x
PWD=`pwd`
/usr/local/bin/virtualenv --python=python3 venv
echo $PWD
activate () {
. $PWD/venv/bin/activate
}
activate
You can also do this using a subshell to better contain your usage - here's a practical example:
#!/bin/bash
commandA --args
# Run commandB in a subshell and collect its output in $VAR
# NOTE
# - PATH is only modified as an example
# - output beyond a single value may not be captured without quoting
# - it is important to discard (or separate) virtualenv activation stdout
# if the stdout of commandB is to be captured
#
VAR=$(
PATH="/opt/bin/foo:$PATH"
. /path/to/activate > /dev/null # activate virtualenv
commandB # tool from /opt/bin/ which requires virtualenv
)
# Use the output from commandB later
commandC "$VAR"
This style is especially helpful when
a different version of commandA or commandC exists under /opt/bin
commandB exists in the system PATH or is very common
these commands fail under the virtualenv
one needs a variety of different virtualenvs
What do you source the bash script for?
If you intend to switch between multiple virtualenvs or enter one virtualenv quickly, have you tried virtualenvwrapper? It provides a lot of utils like workon venv, mkvirtualenv venv and so on.
If you just want to run a Python script in a certain virtualenv, use /path/to/venv/bin/python script.py to run it.
As others already stated, what you are doing wrong is not sourcing the script you created. When you run the script just like you showed, it creates a new shell which activates the virtual environment and then exits, so there are no changes to your original shell from which you ran the script.
You need to source the script, which will make it run in your current shell.
You can do that by calling source shell.sh or . shell.sh
To make sure the script is sourced instead of executed normally, it's nice to have some checks in place in the script to remind you. For example, the script I use is this:
#!/bin/bash
if [[ "$0" = "$BASH_SOURCE" ]]; then
echo "Needs to be run using source: . activate_venv.sh"
else
VENVPATH="venv/bin/activate"
if [[ $# -eq 1 ]]; then
if [ -d $1 ]; then
VENVPATH="$1/bin/activate"
else
echo "Virtual environment $1 not found"
return
fi
elif [ -d "venv" ]; then
VENVPATH="venv/bin/activate"
elif [-d "env"]; then
VENVPATH="env/bin/activate"
fi
echo "Activating virtual environment $VENVPATH"
source "$VENVPATH"
fi
It's not bulletproof but it's easy to understand and does its job.
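For example, with a ./venv directory present:
user@localhost:src$ ./activate_venv.sh
Needs to be run using source: . activate_venv.sh
user@localhost:src$ . activate_venv.sh
Activating virtual environment venv/bin/activate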
You should use multiple commands on one line, for example:
os.system(". Projects/virenv/bin/activate && python Projects/virenv/django-project/manage.py runserver")
When you activate your virtual environment on a line by itself, I think the activation is forgotten for the following command lines; you can prevent this by chaining the commands on one line.
It worked for me :)
When I was learning venv I created a script to remind me how to activate it.
#!/bin/sh
# init_venv.sh
if [ -d "./bin" ];then
echo "[info] Ctrl+d to deactivate"
bash -c ". bin/activate; exec /usr/bin/env bash --rcfile <(echo 'PS1=\"(venv)\${PS1}\"') -i"
fi
This has the advantage that it changes the prompt.
As stated in other answers, when you run a script, it creates a sub-shell.
When the script exits, all modifications to that shell are lost.
What we need is actually to run a new shell where the virtual environment is active, and not exit from it.
Be aware, this is a new shell, not the one in use before you run your script.
What this means is: if you type exit in it, you will exit the subshell and return to the previous one (the one where you ran the script); it won't close your xterm or whatever, as you may have expected.
The trouble is, when we exec bash, it reads its rc files (/etc/bash.bashrc, ~/.bashrc), which will change the shell environment. The solution is to provide bash with a way to set up the shell as usual, while additionally activating the virtual environment. To do this, we create a temporary file recreating the original bash behavior and adding the few things we need to enable our venv, and then ask bash to use it instead of its usual rc files.
A beneficial side-effect of having a new shell "dedicated" to our venv, is that to deactivate the virtual environment, the only thing needed is to exit the shell.
I use this in the script shown below to provide a 'deactivate' option, which works by sending a signal to the new shell (kill -SIGUSR1); this signal is intercepted (trap ...) and triggers the exit from the shell.
Note: I use SIGUSR1 so as not to interfere with whatever could be set up in the "normal" behavior.
The script I use:
#!/bin/bash
PYTHON=python3
myname=$(basename "$0")
mydir=$(cd $(dirname "$0") && pwd)
venv_dir="${mydir}/.venv/dev"
usage() {
printf "Usage: %s (activate|deactivate)\n" "$myname"
}
[ $# -eq 1 ] || { usage >&2; exit 1; }
in_venv() {
[ -n "$VIRTUAL_ENV" -a "$VIRTUAL_ENV" = "$venv_dir" -a -n "$VIRTUAL_ENV_SHELL_PID" ]
}
case $1 in
activate)
# check if already active
in_venv && {
printf "Virtual environment already active\n"
exit 0
}
# check if created
[ -e "$venv_dir" ] || {
$PYTHON -m venv --clear --prompt "venv: dev" "$venv_dir" || {
printf "Failed to initialize venv\n" >&2
exit 1
}
}
# activate
tmp_file=$(mktemp)
cat <<EOF >"$tmp_file"
# original bash behavior
if [ -f /etc/bash.bashrc ]; then
source /etc/bash.bashrc
fi
if [ -f ~/.bashrc ]; then
source ~/.bashrc
fi
# activating venv
source "${venv_dir}/bin/activate"
# remove deactivate function:
# we don't want to call it by mistake
# and forget we have an additional shell running
unset -f deactivate
# exit venv shell
venv_deactivate() {
printf "Exitting virtual env shell.\n" >&2
exit 0
}
trap "venv_deactivate" SIGUSR1
VIRTUAL_ENV_SHELL_PID=$$
export VIRTUAL_ENV_SHELL_PID
# remove ourself, don't leave temporary files lying around
rm -f "${tmp_file}"
EOF
exec "/bin/bash" --rcfile "$tmp_file" -i || {
printf "Failed to execute virtual environment shell\n" >&2
exit 1
}
;;
deactivate)
# check if active
in_venv || {
printf "Virtual environment not found\n" >&2
exit 1
}
# exit venv shell
kill -SIGUSR1 $VIRTUAL_ENV_SHELL_PID || {
printf "Failed to kill virtual environment shell\n" >&2
exit 1
}
exit 0
;;
*)
usage >&2
exit 1
;;
esac
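Usage, assuming the script is saved as venv.sh (the name is my own choice) and made executable:
./venv.sh activate      # creates .venv/dev on first use, then execs the dedicated shell
./venv.sh deactivate    # from inside that shell: signals it to exit (plain exit works too)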
I simply added this into my .bashrc-personal config file.
function sv () {
if [ -d "venv" ]; then
source "venv/bin/activate"
else
if [ -d ".venv" ]; then
source ".venv/bin/activate"
else
echo "Error: No virtual environment detected!"
fi
fi
}
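Then, in any project directory containing a venv or .venv folder, you should see something like:
user@localhost:~/project$ sv
(venv) user@localhost:~/project$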
