I am trying to use the guidance at http://www.slideshare.net/fullscreen/randyzwitch/ipython-ec2/12 to install public IPython notebooks on an AWS instance. One problem I encounter is that when I try to create a profile, I do not observe the creation of an ipython_notebook_config.py file (as explained in the tutorial, as per the screenshot); I only get an ipython_kernel_config.py file, which has very different contents and cannot be edited in the way the tutorial explains. Can someone help me understand why this happens, and what I should do next? Many thanks.
The notebook server is no longer part of IPython; it's a separate Jupyter project, which has its own config directory ~/.jupyter. The config file for the notebook server is ~/.jupyter/jupyter_notebook_config.py. You can use this to configure the notebook server. If you want to keep multiple configurations of the notebook server, you can use the environment variable JUPYTER_CONFIG_DIR to specify that a different directory should be used:
JUPYTER_CONFIG_DIR=~/jupyter_nbserver jupyter notebook
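For the public-server use case in the question, a minimal ~/.jupyter/jupyter_notebook_config.py might look like the sketch below; the IP, port, and certificate path are placeholders you would replace with your own values:
# Sketch of ~/.jupyter/jupyter_notebook_config.py for a password-protected public notebook server
c = get_config()
c.NotebookApp.ip = '0.0.0.0'         # listen on all interfaces (placeholder; restrict if you can)
c.NotebookApp.port = 8888            # placeholder port
c.NotebookApp.open_browser = False   # don't try to open a browser on the server
# c.NotebookApp.certfile = '/path/to/mycert.pem'  # placeholder path; enables HTTPS
# c.NotebookApp.password = 'sha1:...'             # hashed password (see "Running a public notebook server")
See the "Running a public notebook server" reference below for the full set of options.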
Note: IPython has not lost its profiles or config files, so ~/.ipython/profile_default/ipython_config.py and startup files, etc. continue to work as before for configuring the IPython kernel, just not the notebook server itself.
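For example, a minimal sketch of an IPython-side setting that still lives in the profile (the exec_lines contents here are purely illustrative):
# Sketch of ~/.ipython/profile_default/ipython_config.py -- configures the IPython kernel, not the notebook server
c = get_config()
c.InteractiveShellApp.exec_lines = ['import numpy as np']  # illustrative: run on every kernel start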
References:
Running a public notebook server
Jupyter configuration
Migration from IPython to Jupyter
I've read about the Jupyter ContentsManager but have no idea how to use it, and the documentation is really bad. What should I do? Where do I run it, and how do I connect it to my Jupyter environment and notebooks?
To swap the contents manager in JupyterLab 3.0+ running on the new jupyter server (this is the default setup, although if you are running on JupyterHub it might still use the old notebook server), create a jupyter_server_config.py file; you can auto-generate it in the appropriate location using:
jupyter server --generate-config
and set the contents manager class to your own manager:
c.ServerApp.contents_manager_class = "python.module.for.your.ContentsManagerSubclass"
This option is described on the list of configuration options of jupyter server.
(Alternatively, you could use a JSON file.) For older JupyterLab versions, or if for some reason you are using the old notebook server, you will want to use jupyter_notebook_config.py, where this option was named c.NotebookApp.contents_manager_class. You can read more on this topic here, although the examples reference the old notebook way of doing things, so you would need to update them accordingly.
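As an illustration only, a minimal contents manager subclass might look like the sketch below; it assumes the new jupyter_server package, and the module and class names are made up for the example:
# my_manager.py -- minimal sketch of a custom contents manager (assumes jupyter_server is installed)
from jupyter_server.services.contents.filemanager import FileContentsManager

class MyContentsManager(FileContentsManager):
    """Extends the default file-based manager; here it only logs each fetch."""
    def get(self, path, content=True, type=None, format=None, **kwargs):
        self.log.info("Fetching %s", path)
        return super().get(path, content=content, type=type, format=format, **kwargs)
You would then set c.ServerApp.contents_manager_class = "my_manager.MyContentsManager" in jupyter_server_config.py, assuming my_manager is importable (installed or on PYTHONPATH).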
You may also be interested in how jupytext hot-swaps the contents manager in response to user settings; see the code here.
I have Anaconda Navigator on my work computer and I've changed the default working directory for Jupyter notebooks to be a certain location on the firm server, using the steps given here
I have also created a second environment in Anaconda, for which I would like to use a different Jupyter Notebook working directory than that of the base (root). To do this, I believe I would need to:
Create a second Jupyter Notebook config file
Get the second environment to refer to the new config file, while ensuring that the base (root) environment still refers to the original config file.
How would I go about this? Alternate approaches to creating multiple working directories also welcome.
I was looking to achieve the same result: passing different configurations, including settings such as the working and workspace directories, when launching Jupyter Notebook/Lab in different conda or other virtual environments.
I noticed the following in the help output:
jupyter lab --help
--config=<Unicode>
Full path of a config file.
Default: ''
Equivalent to: [--JupyterApp.config_file]
Thus, to achieve the desired result, it is possible to pass the path to the relevant config file when launching Jupyter Lab/Notebook, like so:
jupyter lab --config=~/.jupyter/path_to_my_custom_jupyter_config_for_env_1.py
# or
jupyter notebook --config=~/.jupyter/path_to_my_custom_jupyter_config_for_env_1.py
You can copy your current configuration file, make the relevant adjustments, and save it as a different file. You would then pass the path to that new config file when launching Jupyter as shown above. To simplify, you can create shortcuts that do that (instead of typing/copying the specific parameters on each launch).
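For example, a per-environment config file could contain something like the sketch below; the path is a placeholder, c.ServerApp.root_dir applies to the newer jupyter server, and the classic notebook uses c.NotebookApp.notebook_dir instead. If you have no configuration file to copy yet, jupyter notebook --generate-config will create a commented template.
# Sketch of ~/.jupyter/path_to_my_custom_jupyter_config_for_env_1.py (the directory below is a placeholder)
c = get_config()
c.ServerApp.root_dir = '/path/to/env_1/notebooks'         # working directory for jupyter server / JupyterLab
# c.NotebookApp.notebook_dir = '/path/to/env_1/notebooks' # equivalent option for the classic notebook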
Otherwise (with no specific argument passed), Jupyter Lab/Notebook will launch using the default configuration file if it exists, which by default lives under the home directory of the user launching Jupyter (~/.jupyter/jupyter_notebook_config.py).
My Jupyter/IPython notebooks reside in various directories all over my file system. I don't enjoy navigating hierarchies of directories in the Jupyter notebook browser every time I have to open a notebook. In the absence of the (still) missing feature for bookmarking directories within Jupyter, I want to explore whether I can open a notebook from the command line such that it is opened by the Jupyter instance that is already running. I don't know how to do this.
Option 1: Run multiple jupyter notebook servers from your project directory root(s). This avoids navigating deeply nested structures in the browser UI. I often run many notebook servers simultaneously without issue.
$ cd path/to/project/; jupyter notebook;
Option 2: If you know the path, you could use the webbrowser module:
$ python -m webbrowser http://localhost:port/path/to/notebook/notebook-name.ipynb
Of course you could alias frequently accessed notebooks to something nice as well.
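For instance, a hypothetical alias in ~/.bashrc (the port and notebook path are made up):
alias open-analysis='python -m webbrowser http://localhost:8888/notebooks/analysis.ipynb'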
I have a custom Jupyter kernel which runs IPython using a custom IPython profile which uses a matplotlib stylesheet.
I know to run this successfully normally I would put:
The matplotlib stylesheet in ~/.config/matplotlib/stylelib/
The IPython profile in ~/.ipython/
The kernel json in ~/.jupyter/kernels/my_kernel/
But I am doing this as part of a larger program which runs in a virtualenv, and if I put the files in the locations above, then any notebook server running on the computer will be able to see the custom kernels, even if it is running outside the venv. I don't want this, because I don't want my program to interfere with other notebooks on the computer.
I think what I need to do is put the files above somewhere equivalent inside the venv, but I can't figure out where they should go. Does anyone know where they would go? Or is this just something IPython/Jupyter can't/won't do?
It's probably worth mentioning that in the case of the stylesheet for example I don't want to just put it in the working directory of my program (which is one option matplotlib offers).
You can put kernelspecs in VIRTUAL_ENV/share/jupyter/kernels/ and they will be made available if the notebook server is running in that env. In general, <sys.prefix>/share/jupyter/kernels is included in the path to look for kernelspecs.
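For reference, the kernel.json inside VIRTUAL_ENV/share/jupyter/kernels/my_kernel/ usually looks roughly like this; the display name is illustrative, and the --profile argument is one (assumed) way to point the kernel at your custom IPython profile:
{
  "argv": ["python", "-m", "ipykernel_launcher", "-f", "{connection_file}", "--profile", "my_profile"],
  "display_name": "My Custom Kernel (venv)",
  "language": "python"
}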
To see the various locations Jupyter will look in, check the output of jupyter --paths:
$ jupyter --paths
config:
/Users/you/.jupyter
/Users/you/env/etc/jupyter
/usr/local/etc/jupyter
/etc/jupyter
data:
/Users/you/Library/Jupyter
/Users/you/env/share/jupyter
/usr/local/share/jupyter
/usr/share/jupyter
runtime:
/Users/you/Library/Jupyter/runtime
Kernelspecs are considered data files, and will be found in any of those directories listed under data:, in a kernels subdirectory, e.g. /usr/local/share/jupyter/kernels.
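If you already have a kernelspec directory prepared, one way to copy it into the environment's data directory is jupyter kernelspec install with --prefix (the source path below is a placeholder):
$ jupyter kernelspec install /path/to/my_kernel --prefix="$VIRTUAL_ENV" --name=my_kernel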
There was an official(?) recommendation for running an IPython Notebook server by creating a profile via
$ ipython profile create nbserver
as recommended in http://ipython.org/ipython-doc/1/interactive/public_server.html. This allowed for very different, and very useful, behavior when starting an IPython Notebook via ipython notebook versus ipython notebook --profile=nbserver.
With Jupyter 4.0 this has changed: there are no longer profiles. I found the conversation at https://gitter.im/ipython/ipython/archives/2015/05/29, in which user minrk says:
The .ipython directory has several things in it:
multiple config directories (called profiles)
one 'data' directory, containing things like kernelspecs, nbextensions
runtime info scattered throughout, but mostly in profiles
Jupyter follows more platform-appropriate conventions:
one config dir at JUPYTER_CONFIG_DIR, default: .jupyter
one data dir at JUPYTER_DATA_DIR, default: platform-specific
one runtime dir at JUPYTER_RUNTIME_DIR, default: platform-specific
And a rather cryptic remark:
If you want to use different config, specify a different config directory with JUPYTER_CONFIG_DIR=whatever
What's the best way to get different behavior (say, between when running as a server vs normal usage)?
Will it involve running something like:
$ export JUPYTER_CONFIG_DIR=~/.jupyter-nbserver
$ jupyter notebook
whenever a server 'profile' needs to be run? and
$ export JUPYTER_CONFIG_DIR=~/.jupyter
$ jupyter notebook
whenever a 'normal' profile needs to run? Because that seems terrible. What's the best way to do this in Jupyter 4.0?
Using some code from this blog post (http://www.svds.com/jupyter-notebook-best-practices-for-data-science/) and updating it, the easiest solution appears to be to create an alias, like:
alias jupyter-nbserver='JUPYTER_CONFIG_DIR=~/.jupyter-nbserver jupyter notebook'
So now you can run the jupyter notebook with a different config via the simple command jupyter-nbserver.
A more robust solution might involve creating a bash function that sets the environment variable, checks whether a config file exists, creates one if not, and then launches the notebook, but that's probably overkill; a sketch is shown below. The answer I give on this related question (https://stackoverflow.com/a/32516200/246856) goes into creating the initial config files for a new 'profile'.
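For completeness, a sketch of such a bash function (the config directory name is arbitrary):
# Wrapper: run jupyter notebook with a given config directory, generating a default config on first use
jupyter_with_config() {
    local cfgdir="$1"; shift
    [ -f "$cfgdir/jupyter_notebook_config.py" ] || JUPYTER_CONFIG_DIR="$cfgdir" jupyter notebook --generate-config
    JUPYTER_CONFIG_DIR="$cfgdir" jupyter notebook "$@"
}
# usage: jupyter_with_config ~/.jupyter-nbserver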