JUPYTER_PATH in environment variables not working - python

I am trying to update JUPYTER_PATH for Jupyter Notebook. I set the environment variables following the Jupyter documentation, but jupyter contrib nbextension install --user, for example, still installed under C:\Users\username\AppData\nbextensions instead of C:\somedir\AppData\Roaming\jupyter\nbextensions.
I added these to my environment variables:
JUPYTER_CONFIG_DIR=C:\somedir\.jupyter
JUPYTER_PATH=C:\somedir\AppData\Roaming\jupyter
JUPYTER_RUNTIME_DIR=C:\somedir\AppData\Roaming\jupyter
jupyter --path shows
PS C:\somedir\> jupyter --path
config:
C:\somedir\.jupyter
C:\anaconda\python27\win64\431\etc\jupyter
C:\ProgramData\jupyter
data:
C:\somedir\AppData\Roaming\jupyter
C:\Users\username\AppData\Roaming\jupyter
C:\anaconda\python27\win64\431\share\jupyter
C:\ProgramData\jupyter
runtime:
C:\somedir\AppData\Roaming\jupyter
jupyter --data-dir shows
jupyter --data-dir
C:\Users\username\AppData\Roaming\jupyter
I think C:\Users\username\AppData\Roaming\jupyter needs to be removed, but I'm not sure how. Can you please help?

To set the user data directory, you should instead use the JUPYTER_DATA_DIR environment variable, in your case set to C:\somedir\AppData\Roaming\jupyter. You can also unset JUPYTER_PATH (see below for details).
Although it's not terribly obvious from the documentation, the nbextension install command takes no notice of the JUPYTER_PATH environment variable, since it doesn't use the jupyter_core.paths.jupyter_path function, but instead uses jupyter_core.paths.jupyter_data_dir to construct the user-data nbextensions directory directly.
The entry C:\Users\username\AppData\Roaming\jupyter from the data section of the output of jupyter --paths is the user data directory, since JUPYTER_PATH is used in addition to other entries, rather than replacing any. For your purposes, I suggest you unset JUPYTER_PATH, since you can get what you want without it.
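To illustrate why JUPYTER_DATA_DIR is the variable that matters here, this is a simplified sketch of the lookup order on Windows (an illustration only, not the actual jupyter_core code, which also covers macOS and Linux):

```python
import os


def user_data_dir(env=None):
    """Simplified sketch of how the Jupyter user data directory is
    resolved on Windows: JUPYTER_DATA_DIR wins if set, otherwise it
    falls back to %APPDATA%\\jupyter. (The real logic lives in
    jupyter_core.paths.jupyter_data_dir.)"""
    env = os.environ if env is None else env
    if env.get("JUPYTER_DATA_DIR"):
        return env["JUPYTER_DATA_DIR"]
    return os.path.join(env.get("APPDATA", ""), "jupyter")


# With JUPYTER_DATA_DIR set, the APPDATA fallback is never consulted:
print(user_data_dir({"JUPYTER_DATA_DIR": r"C:\somedir\AppData\Roaming\jupyter"}))
```

Once JUPYTER_DATA_DIR is set (and JUPYTER_PATH unset), jupyter --data-dir should report C:\somedir\AppData\Roaming\jupyter, and user installs like nbextension install --user should land there.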

Related

Easiest way to launch a Jupyter notebook after installing `Nbmake`

I followed along with Semaphore's blog post on testing Jupyter notebooks using pytest and Nbmake. It is a great post, and testing worked great. Summarizing how I applied it:
run pip install pytest nbmake into a virtual env
run pytest --nbmake notebooks, where notebooks is a folder containing my *.ipynb files
It's working correctly, because when I add an intentional error cell, the test fails.
What I'd like to know is the minimal set of additional libraries and commands necessary for me to also run my notebooks interactively in the same environment. I know that you can add the --overwrite flag to inspect the results, and this is definitely very useful, but that's not what I'm asking for. In particular, I'd like to have steps (3) and (4) which:
pip install some additional libraries (or maybe we can even skip this step altogether?)
awesome-jupyter-command notebooks/foo.ipynb, so that the jupyter kernel is started and automatically displays foo.ipynb for interactive evaluation
Most jupyter server commands (e.g. jupyter notebook and jupyter lab) accept a directory or notebook file as a positional argument, so you can do:
pip install jupyterlab
jupyter lab notebooks/foo.ipynb
which will launch the server and open the specified file.
Some other examples, for different flavors of UI:
# 'retro' single-document interface with new features
pip install retrolab
jupyter retro notebooks/foo.ipynb
# 'classic' application, which is trying to push folks to lab-based UI
pip install notebook
jupyter notebook notebooks/foo.ipynb
There's also nbopen, which adds an additional step of checking for already-running servers, rather than always starting a new one:
pip install nbopen
nbopen notebooks/foo.ipynb

Jupyter Book build fails after default create

I'm still learning how this all works, so please bear with me.
I'm running conda 4.8.5 on my Windows 10 machine. I've already installed all necessary Jupyter extensions, I think (Jupyter Lab, Jupyter Notebook, Jupyter Book, Node.js, and their dependencies).
The problem might have to do with the fact that I've installed Miniconda on a separate (D:/) drive.
I've set up a virtual environment (MyEnv) with all the packages I might need for this project. These are the steps I follow:
Launch CMD window
$ conda activate MyEnv
$ jupyter-lab --notebook-dir "Documents/Jupyter Books"
At this point a browser tab opens running Jupyter Lab
From the launcher within Jupyter Lab, open a terminal
$ cd "Documents/Jupyter Books"
$ jb create MyCoolBook
New folder with template book contents gets created in this directory (Yay!)
Without editing anything: $ jb build MyCoolBook
A folder gets added to MyCoolBook called _build, but it doesn't contain much more than a few CSS files.
The terminal throws this error traceback which wasn't very helpful to me. The issue may be obvious to an experienced user.
I am not sure how to proceed. I've reset the entire environment a few times trying to get this to work. What do you suggest? I'm considering submitting a bug report but I want to rule out the very reasonable possibility that I'm being silly.
I asked around in the GitHub page/forum for Jupyter Book. Turns out it's a matter of text encoding in Windows (I could have avoided this by reading deeper into the documentation).
If anyone runs across this issue, know that it can be worked around by reverting to Python 3.7.* and setting an environment variable (PYTHONUTF8=1), but I would not recommend this because some other packages might require the default system encoding. Instead, follow the instructions in this section of the documentation.

Unable to get environment variables using os python

I recently created two environment variables in my terminal as shown below
export SPARK_HOME='/opt/spark/'
export HAIL_HOME='/home/ABCD/.pyenv/versions/3.7.2/envs/bio/lib/python3.7/site-packages/hail/'
When I use echo $SPARK_HOME or echo $HAIL_HOME, I am able to see the path as output
But, when I use the below os commands in jupyter notebook
os.getenv('SPARK_HOME') # able to get the output /opt/spark/
os.getenv('HAIL_HOME') # returns no output
I also tried defining the same variables from Jupyter using os.putenv, but even then I see output only for SPARK_HOME.
However, I am able to see the environment variables SPARK_HOME and HAIL_HOME in my terminal using the printenv command.
Can you help me understand what the problem is?
I realized that it doesn't produce output for HAIL_HOME because it is installed in my virtual environment (see .pyenv, which is a hidden folder for my virtual environment).
However, if anyone can confirm this, even better.
Use os.environ.get("SPARK_HOME").
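One gotcha worth checking: a variable exported in a terminal after the Jupyter server was started (or in a different shell) is never inherited by the already-running kernel, and os.putenv in particular does not update what os.getenv sees. A quick sanity check from a notebook cell (the path below is copied from the question):

```python
import os

# os.getenv is effectively os.environ.get: both return None (or a
# default you supply) when the variable is missing from this
# process's environment.
print(os.environ.get("HAIL_HOME", "<not set>"))

# Caveat: os.putenv updates the C-level environment only; os.environ
# and os.getenv will NOT see values set that way. Assign through
# os.environ instead, which also calls putenv under the hood:
os.environ["HAIL_HOME"] = "/home/ABCD/.pyenv/versions/3.7.2/envs/bio/lib/python3.7/site-packages/hail/"
print(os.environ.get("HAIL_HOME"))
```

If the first print shows "&lt;not set&gt;", the kernel's process simply never inherited the variable; export it before launching jupyter from that same shell.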

How to place custom Jupyter kernels inside virtual environment?

I have a custom Jupyter kernel which runs IPython using a custom IPython profile which uses a matplotlib stylesheet.
I know to run this successfully normally I would put:
The matplotlib stylesheet in ~/.config/matplotlib/stylelib/
The IPython profile in ~/.ipython/
The kernel json in ~/.jupyter/kernels/my_kernel/
But I am doing this as part of a larger program which runs in a virtualenv, and if I put the things as above then any notebook server running on the computer will be able to see the custom kernels, even if it is running outside the venv. I don't want this, because I don't want my program to interfere with other notebooks on the computer.
I think what I need to do is put these files somewhere equivalent inside the venv, but I can't figure out where they should go. Does anyone know where they would go? Or is this just a thing IPython/Jupyter can't/won't do?
It's probably worth mentioning that in the case of the stylesheet for example I don't want to just put it in the working directory of my program (which is one option matplotlib offers).
You can put kernelspecs in VIRTUAL_ENV/share/jupyter/kernels/ and they will be made available if the notebook server is running in that env. In general, <sys.prefix>/share/jupyter/kernels is included in the path to look for kernelspecs.
To see the various locations Jupyter will look in, check the output of jupyter --paths:
$ jupyter --paths
config:
/Users/you/.jupyter
/Users/you/env/etc/jupyter
/usr/local/etc/jupyter
/etc/jupyter
data:
/Users/you/Library/Jupyter
/Users/you/env/share/jupyter
/usr/local/share/jupyter
/usr/share/jupyter
runtime:
/Users/you/Library/Jupyter/runtime
Kernelspecs are considered data files, and will be found in any of those directories listed under data:, in a kernels subdirectory, e.g. /usr/local/share/jupyter/kernels.
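As a concrete sketch (my_kernel is a placeholder name), you can compute where a kernelspec would live inside the active environment:

```python
import sys
from pathlib import Path

# Inside an activated virtualenv, sys.prefix points at the venv root,
# so per-environment Jupyter data files live under <sys.prefix>/share/jupyter.
kernels_dir = Path(sys.prefix) / "share" / "jupyter" / "kernels"

# A custom kernel named "my_kernel" (placeholder) would go here,
# as a directory containing its kernel.json:
my_kernel_dir = kernels_dir / "my_kernel"
print(my_kernel_dir)
```

For IPython-based kernels, python -m ipykernel install --prefix="$VIRTUAL_ENV" --name my_kernel should write the spec into that location for you.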

How do I get IPython profile behavior from Jupyter 4.x?

There was an official(?) recommendation for running an IPython Notebook server by creating a profile via
$ ipython profile create nbserver
as recommended in http://ipython.org/ipython-doc/1/interactive/public_server.html. This allowed for very different and very useful behavior when starting an IPython Notebook via ipython notebook and ipython notebook --profile=nbserver.
With Jupyter 4.0 this changed: there are no longer profiles. I've found the conversation https://gitter.im/ipython/ipython/archives/2015/05/29, in which user minrk says:
The .ipython directory has several things in it:
multiple config directories (called profiles)
one 'data' directory, containing things like kernelspecs, nbextensions
runtime info scattered throughout, but mostly in profiles
Jupyter follows more platform-appropriate conventions:
one config dir at JUPYTER_CONFIG_DIR, default: .jupyter
one data dir at JUPYTER_DATA_DIR, default: platform-specific
one runtime dir at JUPYTER_RUNTIME_DIR, default: platform-specific
And a rather cryptic remark:
If you want to use different config, specify a different config directory with JUPYTER_CONFIG_DIR=whatever
What's the best way to get different behavior (say, between when running as a server vs normal usage)?
Will it involve running something like:
$ export JUPYTER_CONFIG_DIR=~/.jupyter-nbserver
$ jupyter notebook
whenever a server 'profile' needs to be run? and
$ export JUPYTER_CONFIG_DIR=~/.jupyter
$ jupyter notebook
whenever a 'normal' profile needs to run? Because that seems terrible. What's the best way to do this in Jupyter 4.0?
Using some code from this blog post http://www.svds.com/jupyter-notebook-best-practices-for-data-science/ and updating it, the easiest solution appears to be to create an alias, like:
alias jupyter-nbserver='JUPYTER_CONFIG_DIR=~/.jupyter-nbserver jupyter notebook'
So now you can run the jupyter notebook with a different config via the simple command jupyter-nbserver.
A more robust solution might involve creating a bash function that changes the environment variable, checks whether there's a config file, if not creating one, then executing, but that's probably overkill. The answer that I give on this related question https://stackoverflow.com/a/32516200/246856 goes into creating the initial config files for a new 'profile'.
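For what it's worth, that "bash function" idea can be sketched in Python too (a hypothetical launcher script, not an official tool): point JUPYTER_CONFIG_DIR at the desired profile directory, creating it if needed, and start jupyter notebook with that environment.

```python
import os
import subprocess


def jupyter_env(profile_dir):
    """Return a copy of this process's environment with
    JUPYTER_CONFIG_DIR pointed at profile_dir, creating the
    directory first if it does not exist yet."""
    profile_dir = os.path.abspath(os.path.expanduser(profile_dir))
    os.makedirs(profile_dir, exist_ok=True)
    env = dict(os.environ)
    env["JUPYTER_CONFIG_DIR"] = profile_dir
    return env


def launch(profile_dir):
    # Equivalent to: JUPYTER_CONFIG_DIR=~/.jupyter-nbserver jupyter notebook
    subprocess.run(["jupyter", "notebook"], env=jupyter_env(profile_dir))


# launch("~/.jupyter-nbserver")  # uncomment to start the server 'profile'
```

A freshly created config directory starts out empty, which Jupyter treats as all-defaults; you can then run jupyter notebook --generate-config under that environment to seed it.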
