I have installed the 'calico-document-tools' extension and I can load it from within a Jupyter notebook using:
%%javascript
IPython.load_extensions('calico-document-tools');
How can I load it automatically for each opened notebook?
I tried adding IPython.load_extensions('calico-document-tools'); or IPython.load_extensions('C:/Users/<username>/.ipython/nbextensions/calico-document-tools'); to C:\Users\<username>\.ipython\profile_default\static\custom\custom.js, but it didn't work (the extension should display a number of buttons on the toolbar).
I have only one profile, created with ipython profile create, Python 3.3, Windows 7. Thanks in advance.
To install the extensions I followed the instructions in this notebook (no longer available).
I adjusted them a little to be compatible with IPython 4 (where the notebook server is called jupyter). This command installs the extension globally:
$ jupyter nbextension install https://github.com/Calysto/notebook-extensions/archive/master.zip
Then enable this extension:
$ jupyter nbextension enable calico-document-tools
When you now open or reload a notebook, it should load the extension.
Updating the config to enable the extensions can also be done from inside the notebook:
%%javascript
IPython.notebook.config.update({
    "load_extensions": {
        "calico-spell-check": true,
        "calico-document-tools": true,
        "calico-cell-tools": true
    }
})
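For reference, jupyter nbextension enable records the same load_extensions setting in a JSON config file (typically ~/.jupyter/nbconfig/notebook.json). Below is a minimal Python sketch of that update, assuming that layout; it uses a temporary file so it doesn't touch a real config:

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory

def enable_extension(config_path, name):
    """Merge {"load_extensions": {name: True}} into an nbconfig-style JSON file."""
    path = Path(config_path)
    config = json.loads(path.read_text()) if path.exists() else {}
    config.setdefault("load_extensions", {})[name] = True
    path.write_text(json.dumps(config, indent=2))
    return config

with TemporaryDirectory() as tmp:
    cfg = Path(tmp) / "notebook.json"
    result = enable_extension(cfg, "calico-document-tools")
    print(result["load_extensions"])
```

Repeated calls with different extension names merge into the same "load_extensions" mapping, which is why enabling one extension does not disable another.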
I followed along with Semaphore's blog post on testing Jupyter notebooks using pytest and nbmake. It's a great post, and testing worked well. To summarize how I applied it:
run pip install pytest nbmake into a virtual env
run pytest --nbmake notebooks, where notebooks is a folder containing my *.ipynb files
It's working correctly, because when I add an intentional error cell, the test fails.
What I'd like to know is the minimal set of additional libraries and commands that are necessary for me to be able to interactively run my notebooks as well in the same environment. I know that you can also add the --overwrite flag to inspect the results, and this is definitely very useful, but that's not what I'm asking for. In particular, I'd like to have steps (3) and (4) which:
pip install some additional libraries, or maybe we can even skip this step altogether?
awesome-jupyter-command notebooks/foo.ipynb, so now the Jupyter kernel is started and automatically displays foo.ipynb for interactive evaluation
Most Jupyter server commands (e.g. jupyter notebook and jupyter lab) accept a directory or notebook file as a positional argument, so you can do:
pip install jupyterlab
jupyter lab notebooks/foo.ipynb
which will launch the server and open the specified file.
Some other examples, for different flavors of UI:
# 'retro' single-document interface with new features
pip install retrolab
jupyter retro notebooks/foo.ipynb
# 'classic' application, which is trying to push folks to lab-based UI
pip install notebook
jupyter notebook notebooks/foo.ipynb
There's also nbopen, which adds a step of checking for already-running servers rather than always starting a new one:
pip install nbopen
nbopen notebooks/foo.ipynb
I have some code in a .ipynb file and got it to the point where I don't really need the "interactive" feature of IPython Notebook. I would like to just run it straight from a Mac Terminal Command Line.
Basically, if this were just a .py file, I believe I could just do python filename.py from the command line. Is there something similar for a .ipynb file?
nbconvert allows you to run notebooks with the --execute flag:
jupyter nbconvert --execute <notebook>
If you want to run a notebook and produce a new notebook, you can add --to notebook:
jupyter nbconvert --execute --to notebook <notebook>
Or if you want to replace the existing notebook with the new output:
jupyter nbconvert --execute --to notebook --inplace <notebook>
Since that's a really long command, you can use an alias:
alias nbx="jupyter nbconvert --execute --to notebook"
nbx [--inplace] <notebook>
From the command line you can convert a notebook to python with this command:
jupyter nbconvert --to python nb.ipynb
https://github.com/jupyter/nbconvert
You may have to install the Python mistune package:
sudo pip install -U mistune
In your Terminal run ipython:
ipython
then, from the IPython prompt, run your script with:
%run your_script.ipynb
You can export all your code from .ipynb and save it as a .py script. Then you can run the script in your terminal.
Hope it helps.
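That export is mostly mechanical, because a .ipynb file is just JSON with the code stored in each code cell's source field. Here is a stdlib-only sketch of the idea (a hypothetical helper of my own, not a substitute for jupyter nbconvert --to script, which also handles magics and output stripping):

```python
import json

def notebook_to_source(nb_json):
    """Concatenate the source of all code cells in a notebook dict."""
    chunks = []
    for cell in nb_json.get("cells", []):
        if cell.get("cell_type") == "code":
            # 'source' may be stored as a single string or a list of lines
            src = cell["source"]
            chunks.append(src if isinstance(src, str) else "".join(src))
    return "\n\n".join(chunks)

# A tiny in-memory notebook standing in for a real .ipynb file;
# with a real file you would use json.load(open("nb.ipynb"))
nb = {
    "cells": [
        {"cell_type": "markdown", "source": ["# Title\n"]},
        {"cell_type": "code", "source": ["x = 1\n", "print(x)"]},
    ],
    "nbformat": 4,
}
print(notebook_to_source(nb))
```

Markdown cells are simply skipped, which matches what you see in a converted .py script where they become comments or disappear.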
Using ipython:
ipython --TerminalIPythonApp.file_to_run=<notebook>.ipynb
Normally, I prefer this option as it is self-describing. If you prefer to type fewer characters, use:
ipython -c "%run <notebook>.ipynb"
which is basically what Keto already suggested (unfortunately a little bit hidden) as a comment.
In my case, the command that best suited me was:
jupyter nbconvert --execute --clear-output <notebook>.ipynb
Why? This command does not create any extra files (just like a .py file), and the output of the cells is overwritten every time the notebook is executed.
If you run:
jupyter nbconvert --help
--clear-output
Clear output of current file and save in place, overwriting the existing notebook.
In newer versions, instead of:
ipython nbconvert --to python <YourNotebook>.ipynb
you can use jupyter instead of ipython:
jupyter nbconvert --to python <YourNotebook>.ipynb
Update with quoted comment by author for better visibility:
Author's note "This project started before Jupyter's execute API, which is now the recommended way to run notebooks from the command-line. Consider runipy deprecated and unmaintained." – Sebastian Palma
Install the runipy library, which allows running your notebook from the terminal:
pip install runipy
Then just run your notebook:
runipy <YourNotebookName>.ipynb
You can also try a cron job. All the information is here.
I had the same problem and found papermill. The advantage over the other solutions is that you can see the results while the notebook is running, which I find useful when the notebook takes a long time. It is very easy to use:
pip install papermill
papermill notebook.ipynb output.ipynb
It also has other handy options, such as saving the output file to Amazon S3, Google Cloud, etc. See the project page for more information.
You can also use the boar package to run your notebook from within Python code.
from boar.running import run_notebook
outputs = run_notebook("nb.ipynb")
If you update your notebook, you won't have to convert it to a Python file again.
More information at:
https://github.com/alexandreCameron/boar/blob/master/USAGE.md
There is now a jupyter run subcommand that will execute a notebook.
jupyter run notebook.ipynb
More on this command can be found in the Jupyter documentation.
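If you want to trigger execution from a Python script instead of typing the command yourself, you can assemble the same argv and hand it to the stdlib subprocess module. The helper below is my own sketch; actually running the built command assumes jupyter is on your PATH:

```python
import subprocess

def run_notebook_cmd(path, *, inplace=False, allow_errors=False):
    """Build the 'jupyter nbconvert --execute' argv for a notebook."""
    cmd = ["jupyter", "nbconvert", "--execute", "--to", "notebook"]
    if inplace:
        cmd.append("--inplace")
    if allow_errors:
        cmd.append("--allow-errors")
    cmd.append(path)
    return cmd

cmd = run_notebook_cmd("notebook.ipynb", inplace=True)
print(cmd)
# To actually execute (requires jupyter installed):
# subprocess.run(cmd, check=True)
```

Passing the argv as a list (rather than a single shell string) avoids quoting problems with notebook paths that contain spaces.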
From the terminal run
jupyter nbconvert --execute --to notebook --inplace --allow-errors --ExecutePreprocessor.timeout=-1 my_nb.ipynb
The default timeout is 30 seconds. -1 removes the restriction.
If you wish to save the output to a new notebook, use the flag --output my_new_nb.ipynb.
You can also use jupytext (https://jupytext.readthedocs.io/en/latest/index.html).
This allows you to pair your notebook: for each .ipynb file you also have a .py file with some comments. The .py file can be executed as usual.
You get the benefits of both worlds at the cost of one extra file.
And if you are using version control, you can commit only the .py files, which give a nice diff instead of the ugly .ipynb diffs.
(The syntax in the .py files is similar to Databricks notebooks, if you are familiar with them.)
In a batch file, paste the lines below. They use ipython to run the .ipynb file.
@echo on
call "C:\ProgramData\Anaconda3\Scripts\activate.bat"
ipython "path\test.ipynb"
pause
I've done some research on this topic and written an article on 4 ways to run Jupyter Notebook from the command line; below is a summary of my findings.
1. Use nbconvert
The nbconvert package is already installed with Jupyter Notebook. It can be used to execute notebooks. Additionally, it has many features:
it can export a notebook to PDF or HTML,
it can hide code in the output notebook,
it can execute a notebook even with errors in cells.
Example notebook execution:
jupyter nbconvert --execute --to notebook --allow-errors your-notebook.ipynb
The above command will execute all cells (even those with errors) and output a your-notebook.nbconvert.ipynb file.
2. Use papermill
papermill allows you to parametrize notebooks. You can define variables as parameters (with cell tags).
Example command:
papermill -p name Piotrek your-notebook.ipynb output-notebook.ipynb
Example notebook with injected parameters:
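Conceptually, the injection works by rewriting the code cell tagged parameters before execution. Here is a simplified stdlib sketch of that idea (papermill itself does more, such as adding a separate injected-parameters cell rather than overwriting the original):

```python
def inject_parameters(nb_json, params):
    """Replace the source of the cell tagged 'parameters' with new assignments."""
    new_source = "\n".join(f"{k} = {v!r}" for k, v in params.items())
    for cell in nb_json["cells"]:
        tags = cell.get("metadata", {}).get("tags", [])
        if cell.get("cell_type") == "code" and "parameters" in tags:
            cell["source"] = new_source
    return nb_json

# Minimal stand-in for a notebook with a tagged parameters cell
nb = {"cells": [{"cell_type": "code",
                 "metadata": {"tags": ["parameters"]},
                 "source": 'name = "default"'}]}
inject_parameters(nb, {"name": "Piotrek"})
print(nb["cells"][0]["source"])
```

Because the injected values are written as ordinary Python assignments, the notebook still runs standalone with its default values when no parameters are supplied.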
3. Manually download notebook as .py script
There is an option to manually download the notebook as a .py script:
After downloading, you can add execute permissions to the file and run it as a command-line script.
4. Use jupytext
The jupytext package allows you to keep an .ipynb file synchronized with a .py file, so you don't need to manually convert the notebook to a script.
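For reference, the paired .py file uses jupytext's "percent" cell format: markdown cells become comment blocks and code cells are delimited by # %% markers, so the file remains ordinary runnable Python. A small illustrative example of what a paired script can look like:

```python
# %% [markdown]
# # My analysis
# Markdown cells live in comments in the paired script.

# %%
x = 2 + 2
print(x)

# %%
# A second code cell, still plain Python
print(x * 10)
```

Because the markers are just comments, the script runs with a plain python interpreter, while editors like VS Code and Spyder also recognize # %% as cell boundaries.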
The new Jupyter Lab is great, but I am missing the option to turn cells into slides. In classic Jupyter Notebooks, that was under "View > Cell Toolbar > Slideshow":
What happened to the feature? Is there a way to edit slides in Jupyter Lab?
I have JupyterLab 1.1.4 installed on Ubuntu 18.04.3 LTS with a Python 3 virtual-env kernel, and it works great. Just complete your notebook, then configure each cell using the "Notebook Tools" tab on the far left (as shown in the screenshot). Then save and close the notebook and run the command below to output the slides.
Open the terminal, navigate to the recently saved .ipynb, and run
jupyter nbconvert Untitled2.ipynb --to slides
for slides, or
jupyter nbconvert Untitled2.ipynb --to pdf
for a PDF.
Note: You might need to install the TeX packages to produce PDF output. See the docs here, or just run the command below to install them.
sudo apt-get install texlive-xetex texlive-fonts-recommended texlive-generic-recommended
Cheers
In Jupyter Lab you can alter the 'slide type' in the 'Cell Inspector' menu.
It is still possible using nbconvert
Presenting Code Using Jupyter Notebook Slides
The command you need to run:
jupyter nbconvert jupyter_notebook.ipynb --to slides --post serve
If you require the slideshow cell editing functionality, you can either set it in the metadata as explained in this question, or you can switch from Lab to Notebook by going to Help >> Launch Classic Notebook.
re:
However, this doesn’t give me the slideshow cell editing functionality I need in the notebook.
Issue description
Whenever I load rpy2 in a Jupyter notebook, R code executed with the %%R cell magic gets printed in the command prompt instead of the notebook cell. Plots show up properly in the notebook, though.
I also noticed that the %R inline magic works properly, with code printed in the cell as expected.
Installation steps
Python 2.7.11 :: Anaconda 2.5.0 (64-bit)
notebook 4.1.0 installed via pip
R 3.3.1 (x64) located in Program Files
rpy2-2.7.8-cp27-none-win_amd64.whl installed via pip
PATH:
R_USER = C:\Users\myusername
R_HOME = C:\Program Files\R\R-3.3.1
Added C:\Program Files\R\R-3.3.1\bin\i386
You can use the RWinOut Jupyter extension. The following installation instructions are taken from the GitHub page:
You can run the following curl command from a Jupyter notebook cell to download the file to your working directory. You can also download it manually and put it there yourself.
!curl -O "https://raw.githubusercontent.com/vitorcurtis/RWinOut/master/RWinOut.py"
Once it's in your working directory, you can replace %load_ext rpy2.ipython at the top of your script with %load_ext RWinOut. Then you should be able to see the output of cells containing the %%R magic as normal.
This isn't a complete solution, but it's a workaround that might achieve the same effect. I'm not sure if this breaks some functionality, but it seems to work fine for me.
Tested with:
Windows 10 v.1809 Build 17763.503
Python 3.7.2
R 3.6.0
rpy2 2.9.5
One other option is to use the Windows Subsystem for Linux and launch your Jupyter notebook from there. This might not be desirable if you have to reinstall a lot of R and Python packages into a different environment, but it will make the output print correctly without requiring this hacky workaround.