I'm looking for a way to turn OFF autosave in IPython Notebook. I've seen references via Google/Stack Overflow searches on how to turn ON autosave, but I want the opposite (to turn OFF autosave). It would be preferable if this were something that could be set permanently rather than at the top of each notebook.
This will disable autosave once you're in IPython Notebook in the browser: %autosave 0.
Update: There is now a UI feature in JupyterLab: https://github.com/jupyterlab/jupyterlab/pull/3734
If you add this to your custom.js, it will disable autosave for all notebooks:
$([IPython.events]).on("notebook_loaded.Notebook", function () {
IPython.notebook.set_autosave_interval(0);
});
custom.js is found at $(ipython locate profile)/static/custom/custom.js. You can use the same thing to increase or decrease the autosave interval. The value is in milliseconds, so an interval of 30000 means autosave every thirty seconds.
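Note the unit difference: while custom.js works in milliseconds, the %autosave magic mentioned above takes its interval in seconds. For example, to autosave every thirty seconds for the current session only:
%autosave 30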
The original solution from MinRK is outdated, partly because IPython/Jupyter keeps changing so much. I can't find proper documentation for this, other than an indirect reference here, but according to this forum post, the solution now seems to be to edit or create the file ~/.jupyter/custom/custom.js, and add the line:
Jupyter.notebook.set_autosave_interval(0); // disable autosave
This works for me. You can confirm if it works by looking for the brief "Autosave disabled" box in the top right corner of the Jupyter notebook on startup. The full solution in the forum post did not work for me, probably because it is no longer completely valid, and errors in the custom.js file seem to occur silently.
Step-by-Step solution for Jupyter Notebook 5.5.0 on Windows (will probably work for other envs/versions as well)
Find the Jupyter configuration folder:
from jupyter_core.paths import jupyter_config_dir
jupyter_dir = jupyter_config_dir() # C:\users\<user_name>\.jupyter on my machine
Create a sub-folder custom, and create the file custom.js within it:
i.e. 'C:\users\<user_name>\.jupyter\custom\custom.js'
Put the following line in custom.js:
IPython.notebook.set_autosave_interval(0);
Save the file and restart the Jupyter Notebook server (main app).
When opening a notebook you should see "Autosave disabled" briefly appear on the right side of the menu bar.
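If you prefer to script steps 1-3 above (e.g. so the same thing works on Windows, where the shell one-liners further below don't apply directly), here is a minimal Python sketch, assuming you have write access to the Jupyter config directory:
import os
from jupyter_core.paths import jupyter_config_dir

# Locate (and create, if missing) the custom sub-folder of the config dir
custom_dir = os.path.join(jupyter_config_dir(), "custom")
os.makedirs(custom_dir, exist_ok=True)

# Append the disable-autosave line to custom.js
with open(os.path.join(custom_dir, "custom.js"), "a") as f:
    f.write("IPython.notebook.set_autosave_interval(0);\n")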
Edit: The autosave interval on notebook load does not appear to work any more in recent versions of Jupyter Notebook (jupyter notebook --version at 6.0.1). So I'm back to the custom.js solution:
mkdir -p ~/.jupyter/custom
echo "Jupyter.notebook.set_autosave_interval(0);" >> ~/.jupyter/custom/custom.js
As pointed out by Thomas Maloney above, JupyterLab now has a command for that (Uncheck Autosave Documents in the Settings menu).
In Jupyter Notebook, I think the autosavetime extension is easier to use than the custom.js file. It is part of the Jupyter notebook extensions and can be installed with:
pip install jupyter_contrib_nbextensions
jupyter contrib nbextension install
jupyter nbextension enable autosavetime/main
Once it is installed, restart jupyter notebook and go to nbextensions_config in the Edit menu. Select the autosavetime extension, and turn off autosave as follows:
check the box "Set an autosave interval on notebook load. If false, the default is unchanged.", and
enter 0 for "Autosave interval (in minutes) which would be set on notebook load".
To test the modification: open or create a Python notebook and execute, in a new cell,
%%javascript
element.text(Jupyter.notebook.autosave_interval);
If the result is 0, you have successfully turned the autosave off. Congratulations!
As of Jupyter 4.4 (2019), a working solution is to add this to your custom.js file:
require(['base/js/namespace', 'base/js/events'], function (Jupyter, events) {
Jupyter.notebook.set_autosave_interval(0);
console.log("Auto-save has been disabled.");
});
Without the require block, the JavaScript will execute before the Jupyter object is available, resulting in an error.
Just to be clear, custom.js should reside at ~/.jupyter/custom/custom.js -- you must create the custom directory if it does not exist.
Related
My code connects to a database, and sometimes the database disconnects on me. As a result, the script ends. I would like to be able to add a line of code that would allow me to restart the kernel and run all the cells in a Jupyter notebook.
Input:
if condition == True:
    # Restart the kernel and run all Jupyter cells
I understand there is already a question that may seem similar, but it is not. It only creates a button that you can click to restart and run all the cells:
How to code "Restart Kernel and Run all" in button for Python Jupyter Notebook?
Thank you
Would a keyboard shortcut suffice?
For JupyterLab users and those using the document-centric notebook experience in the future, see How to save a lot of time by have a short cut to Restart Kernel and Run All Cells?.
For those still using the classic notebook (version 6 and earlier) interface to run Jupyter notebooks:
A lot of the classic notebook 'tricks' will cease to work with version 7 of the document-centric notebook experience (what most people now consider the 'classic notebook interface') that is on the horizon. Version 7 and forward will use the tech that currently underlies JupyterLab; see Build Jupyter Notebook v7 off of JupyterLab components. So moving towards JupyterLab now will help you in the long run. That said, a programmatic trick for the classic interface is sketched below.
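For the classic (v6 and earlier) interface, one common approach is to inject JavaScript from Python that asks the front end to restart the kernel and re-run everything. A minimal sketch; the restart_run_all name and the 10-second delay (to give the kernel time to come back up) are my own choices, not an official API:
from IPython.display import display_html

def restart_run_all():
    # Works only in the classic notebook front end, where the
    # IPython.notebook JavaScript object is available.
    display_html(
        """<script>
        IPython.notebook.kernel.restart();
        setTimeout(function() { IPython.notebook.execute_all_cells(); }, 10000);
        </script>""",
        raw=True,
    )

# e.g. call it when your database connection drops:
# if connection_lost:
#     restart_run_all()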
I am still new to VS Code, but I am having trouble getting some of the tools for Python Jupyter notebooks to work in VS Code Version: 1.56.2 on Ubuntu Linux 20.04 LTS.
So according to the documentation, there are supposed to be buttons for debugging, including a button to "run code by line". This makes it easier to debug any code issues in a notebook cell. The documentation suggests the notebook interface should look like this.
The buttons in the upper left are the ones that I am interested in.
Now, when I look at my own interface, it looks like this.
So the two interfaces look very different. I am not sure if I need to change any settings in VSCode to enable these buttons. The documentation did not mention changes to any settings.
Any suggestions would be appreciated.
"Run code by line" has not yet been implemented for the new notebooks interface that you are seeing. In the meantime, you can opt back into the old interface with "Run code by line" support by doing the following:
Open your user settings.json by typing Ctrl+Shift+P > "Preferences: Open Settings (JSON)"
Add the following line to your user settings.json file:
"jupyter.experiments.optOutFrom": ["NativeNotebookEditor"]
If the workbench.editorAssociations setting is present in your settings.json file, delete it.
Reload VS Code for the new settings to take effect
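For reference, a minimal settings.json after these steps might look like the following (any other settings you already have would sit alongside this entry):
{
    "jupyter.experiments.optOutFrom": ["NativeNotebookEditor"]
}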
In VS Code 1.59 (see https://github.com/microsoft/vscode-docs/blob/vnext/release-notes/v1_59.md#jupyter-run-by-line):
Jupyter "Run By Line"
We've been working on supporting the "Run By Line" feature in Jupyter
notebooks. This feature is essentially a simplified debug mode that
lets you step through your cell's code line by line without any
complex debug UI. This is still experimental, but you can try it out
by setting "jupyter.experimental.debugging": true, installing
version 6 of ipykernel in your selected kernel, then clicking the "Run
By Line" button in the cell toolbar.
"jupyter.experimental.debugging": true
Of interest
We have been working on supporting debugging in Jupyter notebooks, so
that you can set breakpoints in notebook cells, execute cells
step-by-step, and use all other VS Code debugger features. This is
experimental, but you can try it out by setting
"jupyter.experimental.debugging": true, installing version 6 of
ipykernel in your selected kernel, then clicking the "Debug" button in
the notebook toolbar.
In VS Code v1.58 (see https://github.com/microsoft/vscode-docs/blob/vnext/release-notes/v1_58.md#jupyter-notebook-debugging):
Set a breakpoint and just press F10; debugging will start automatically.
I'm starting to work more with Jupyter notebooks, and am really starting to like it. However, I find it difficult to use it with my particular setup.
I have a workstation for running all the notebooks, but for a large part of my day I'm on-the-go with a space-constrained laptop with no power outlets. I'd like to be able to edit (but not run) these notebooks without installing and running the full Jupyter server backend, which I imagine would suck up a lot of power.
My question is: Is it possible for me to edit (and not run) notebooks without running the Jupyter server?
You could use one of the following options:
1. ipynb-py-convert
With this module you can do a conversion from .py to .ipynb and vice-versa:
ipynb-py-convert ~/name_of_notebook.ipynb ~/name_of_notebook.py
where, according to the documentation, the cells are left as they are. To get back a Jupyter notebook:
ipynb-py-convert ~/name_of_notebook.py ~/name_of_notebook.ipynb
2. IPython
Alternatively, you could do a conversion to .py when you want to work on it with an editor like VS Code or Sublime Text, after you have downloaded your .ipynb file, with ipython:
ipython nbconvert --to python name_of_your_notebook.ipynb
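Also worth knowing: a .ipynb file is plain JSON, so you can inspect or lightly edit it with nothing but Python's standard library, no Jupyter server required. A minimal sketch (the filename is a placeholder):
import json

# Load the notebook; .ipynb files are plain JSON
with open("name_of_notebook.ipynb") as f:
    nb = json.load(f)

# Print the source of every code cell ("source" may be a list of lines or a string)
for cell in nb["cells"]:
    if cell["cell_type"] == "code":
        src = cell["source"]
        print("".join(src) if isinstance(src, list) else src)

# Write any edits back out
with open("name_of_notebook.ipynb", "w") as f:
    json.dump(nb, f, indent=1)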
As I was asking this question, I had opened the notebook locally in Visual Studio Code, but the preview was just the raw text representation of the notebook, so I had assumed that it needed the backend to run.
However, I was about to press submit on the question when I checked back in on it, and the notebook showed up just fine. So one solution is to open it in VS Code and wait a little bit.
JupyterLab has this feature where I can have an IPython console for every notebook I have opened. Whenever I run a cell inside this notebook, the console has all the variables defined and modules imported corresponding to the notebook. In addition, we can run extra commands, which helps with debugging at times. Is there a similar feature in VS Code? I really like it and would like to move completely to VS Code. The Python interactive command line in VS Code is the closest to this that I found. However, it is not attached to the notebook, and I have to run all the code inside the notebook, which is a bit tedious.
I believe this would work: Connecting a terminal to an existing kernel.
However, you're likely looking for a way to do this within VS code. You might be able to do this by running %connect_info in a cell, starting a terminal, and then running the appropriate jupyter command.
Something like so:
jupyter console --existing kernel-2c0993da-95c7-435a-9140-118c10d33e1a.json
If you're referring to .py files, you can do that the same way you would in PyCharm.
First, you need to put a breakpoint in the code:
Then you run the code with the debugger:
Then, when the code reaches the breakpoint, you will be able to play with the variables, like in the Jupyter terminal:
I also like to have a JupyterLab-style console open that is connected to a notebook. This is my workaround in order to achieve this in Visual Studio Code (at least it works when my kernel is a remote Jupyter session).
Suppose your notebook is called hello.ipynb.
Create a dummy file called hello.py.
Open hello.py, right-click in the code window and choose Run Current File in Interactive Window. This opens the JupyterLab-style console.
Change the kernel for the interactive window to the same kernel that the notebook hello.ipynb is using.
(Optional) Close the hello.py tab since it is not needed.
Now I have an interactive window sharing everything with the notebook.
I use Jupyter Notebook to run a series of experiments that take some time.
Certain cells take way too much time to execute, so it's normal that I'd like to close the browser tab and come back later. But when I do, the kernel interrupts running.
I guess there is a workaround for this, but I can't find it.
The simplest workaround to this seems to be the built-in cell magic %%capture:
%%capture output
# Time-consuming code here
Save, close tab, come back later. The output is now stored in the output variable:
output.show()
This will show all interim print results as well as the plain or rich output cell.
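The captured object also exposes the raw text directly, in case you only want the prints (this assumes the output variable from the %%capture cell above):
print(output.stdout)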
TL;DR:
Code doesn't stop when the tab closes, but the output can no longer find the current browser session: it loses track of how it's supposed to be displayed and throws away all new output until the code that was running when the tab closed finishes.
Long Version:
Unfortunately, this isn't implemented (Nov 24th). If there's a workaround, I can't find it either. (Still looking, will update with news.) There is a workaround that saves output and then reprints it, but it won't work if code is still running in that notebook. An alternative would be to have a second notebook in which you can get the output.
I also need this functionality, and for the same reason. The kernel doesn't shut down or interrupt on tab closes. And the code doesn't stop running when you close a tab. The warning given is exactly correct, "The kernel is busy, outputs may be lost."
Running
import time

a = 0
while a < 100:
    a += 1
    print(a)
    time.sleep(1)
in one box, then closing the tab, opening it up again, and then running
print(a)
from another box will cause it to hang until the 100 seconds have finished and the code completes, then it will print 100.
When a tab is closed and you return, the Python process will be in the same state you left it in (as of the last completed save). That was their intended behavior, and what they should have been clearer about in their documentation. The output from the run code actually gets sent to the browser upon reopening it (I've lost the reference that explains this), so hacks like the one in this comment will work, as it can receive those and just throw them into some cell.
Output is kind of only saved in an accessible way through the endpoint connection. They've been working on this for a while (before Jupyter), although I cannot find the current bug in the Jupyter repository (this one references it, but is not it).
The only general workaround seems to be finding a computer you can always leave on, leaving that on the page while it runs, and then remoting in or relying on autosave to be able to access it elsewhere. This is a bad way to do it, but unfortunately it's the way I have to for now.
Related questions:
Closed IPython Notebook that was running code
Confirms that output will not be updated, but does not mention the interrupt functionality.
IPython Notebook - Keep printing to notebook output after closing browser
Offers a workaround in a link (referenced above).
First, install runipy:
pip install runipy
And now run your notebook in the background with the below command:
nohup runipy YourNotebook.ipynb OutputNotebook.ipynb >> notebook.log &
Now the output file will be saved, and you can also watch the logs while it runs with:
tail -f notebook.log
I have been struggling with this issue as well for some time now.
My workaround was to write all my logs to a file, so that when my browser closes (indeed, the browser hangs when a lot of logs come through it), I can follow the kernel's progress by opening the log file (the log file can be opened in Jupyter too).
#!/usr/bin/python
import time
import datetime
import logging

logger = logging.getLogger()

def setup_file_logger(log_file):
    hdlr = logging.FileHandler(log_file)
    formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
    hdlr.setFormatter(formatter)
    logger.addHandler(hdlr)
    logger.setLevel(logging.INFO)

def log(message):
    # outputs to Jupyter console
    print('{} {}'.format(datetime.datetime.now(), message))
    # outputs to file
    logger.info(message)

setup_file_logger('out.log')

for i in range(10000):
    log('Doing hard work here i=' + str(i))
    log('Taking a nap now...')
    time.sleep(1000)
With JupyterLab:
This is not a problem if you are using JupyterLab (with current release v3.x.x).
To be more specific, "not a problem" means that after we close the tab/browser, the notebook's kernel keeps running (as long as the Jupyter server/your terminal is not closed), but the cell's printed output (if there is any) is interrupted.
So when we reopen the notebook, variables etc. are all kept and updated, except for the interrupted printed output.
If you care about the printed info in this case, you could try logging it to a file, OR try using Jupyter's execute API (see below).
With Jupyter Notebook:
If you are still sticking with the legacy (e.g. version 5.x/6.x) Jupyter Notebook, well, it was not possible in the past (i.e. prior to 2022).
BUT, with the planned new Notebook v7 release, which reuses the JupyterLab codebase, this problem will also be solved in the new Jupyter Notebook.
So, try using JupyterLab, or wait and update to Notebook v7:
$ jupyter lab --version
3.4.4
$ # OR wait and update the notebook until
$ # the installed version of notebook is v7
$ jupyter notebook --version
6.4.12
With Jupyter's execute API:
Another workaround is to use Jupyter's execute API:
$ jupyter nbconvert --to notebook --execute mynotebook.ipynb
This is like running the notebook as a .py file, i.e. from the command line rather than in a web-browser UI.
After its execution, a new file named mynotebook.nbconvert.ipynb will be produced, and all printed output will be kept in it, but all variables will be lost. What we can do is pickle the variables that we care about.
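A minimal sketch of the pickling idea; the variable names and file name here are placeholders, not anything Jupyter-specific. Put the dump at the end of the notebook so a headless run persists its results:
import pickle

# In the last cell of mynotebook.ipynb: persist whatever you care about
results = {"accuracy": 0.93}  # placeholder for values computed earlier
with open("results.pkl", "wb") as f:
    pickle.dump(results, f)

# Later, in an interactive session, load them back:
with open("results.pkl", "rb") as f:
    results = pickle.load(f)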
And I don't think using runipy is still a good choice, since it's deprecated and unmaintained (superseded by Jupyter's execute API).
ref:
Q: is it possible to make a jupyter notebook run even if the page is closed?
A: This is being solved in JupyterLab and will be solved in the future Notebook v7 release.
If you've set all cells to run and want to periodically check what's being printed, the following code would be a better option than %%capture. You can always open the log file while the kernel is busy.
import sys
sys.stdout = open("my_log.txt", "a")
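One caveat, in case output seems to lag behind: a file opened this way is block-buffered by default, so prints may not appear in the log immediately. A line-buffered variant (my tweak, not part of the original answer):
import sys

# buffering=1 means line-buffered in text mode: each print is flushed on newline
sys.stdout = open("my_log.txt", "a", buffering=1)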
I constructed this a while ago using jupyter nbconvert; it essentially runs a notebook in the background without any UI:
nohup jupyter nbconvert --ExecutePreprocessor.timeout=-1 --CodeFoldingPreprocessor.remove_folded_code=False --ExecutePreprocessor.allow_errors=True --ExecutePreprocessor.kernel_name=python3 --execute --to notebook --inplace ~/mynotebook.ipynb > ~/stdout.log 2> ~/stderr.log &
timeout=-1: no time out
remove_folded_code=False: if you have the Codefolding extension enabled
allow_errors=True: ignore errored cells and continue running the notebook to the end
kernel_name: if you have multiple kernels; check with jupyter kernelspec list