I am using jupyter-dash in VSCode. I have a long-running iteration, and sometimes I'd like to stop it mid-run.
I tried a few of the options in the link below and none of them worked.
https://stackoverflow.com/questions/58230077/vscode-python-interactive-window-how-to-stop-jupyter-server
How to close IPython Notebook properly?
If I run it with the port number, like this:
jupyter-notebook stop 8064
I get this error:
[NbserverStopApp] WARNING | Config option `kernel_spec_manager_class` not recognized by `NbserverStopApp`.
As mentioned in the question you linked, Jupyter cannot guarantee interruption, so the best solution is to restart Jupyter manually by clicking Restart, or to reload VS Code.
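If you do want to try stopping a server from a shell, a minimal sketch (this assumes a standalone Jupyter server; the server that VS Code manages internally may not appear in the list):
# list running servers and the ports they listen on
jupyter notebook list
# stop the server listening on a given port
jupyter notebook stop 8888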
My code connects to a database, and sometimes the database disconnects on me. As a result the script ends. I would like to be able to add a line of code that restarts the kernel and runs all the cells in the Jupyter notebook.
Input:
if condition == True:
    # kernel restart and run all Jupyter cells here
I understand there is already a question that may seem similar, but it is not: it only creates a button that you can click to restart and run all the cells.
How to code "Restart Kernel and Run all" in button for Python Jupyter Notebook?
Thank you
Would a keyboard shortcut suffice?
For JupyterLab users and those using the document-centric notebook experience in the future, see How to save a lot of time by have a short cut to Restart Kernel and Run All Cells?.
For those still using the classic notebook (version 6 and earlier) interface to run Jupyter notebooks:
A lot of the classic notebook 'tricks' will cease to work with version 7 of the document-centric notebook experience (what most people now consider the 'classic notebook interface') that is on the horizon. Version 7 and onward will use the technology currently underlying JupyterLab; see Build Jupyter Notebook v7 off of JupyterLab components. So moving towards JupyterLab now will help you in the long run.
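If you really do want to trigger it from a code cell rather than a shortcut, here is a sketch that drives the classic notebook's front-end JavaScript API (this only works in the classic interface, not in JupyterLab or Notebook v7, and the 2000 ms delay is an arbitrary guess at how long the restart takes):
from IPython.display import Javascript, display

# restart the kernel, then re-run every cell once it has had time to come back
display(Javascript(
    'IPython.notebook.kernel.restart();'
    'setTimeout(function(){ IPython.notebook.execute_all_cells(); }, 2000);'
))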
I can't import statsmodels.api in Jupyter Notebook anymore. I thought it required updating statsmodels.api, so I typed "conda update statsmodels.api". Then the message below came up.
PackageNotInstalledError: Package is not installed in prefix.
prefix: XXX
package name: statsmodels.api
Note: you may need to restart the kernel to use updated packages.
It seems that updating statsmodels.api requires restarting the kernel. But when I tried to restart the kernel, this warning came up:
"Do you want to restart the current kernel? All variables will be lost."
What does "all variable will be lost" mean? Will I lose all the things saved at Jupyter notebook? If so, how can I restart the Kernel safely without losing all the things I keep in my Jupyter notebook?
Restarting your kernel will reset your Jupyter notebook and remove all variables or methods you have defined.
You will not lose the code you have written; you just have to run all the code cells again to set the variables and methods.
OR,
You can do "Restart & Run All"
It will show this message:
Are you sure you want to restart the current kernel and re-execute the whole notebook? All variables and outputs will be lost.
However, after selecting that option, all the variables and methods will be set again automatically; you don't have to execute each code cell manually.
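Incidentally, the PackageNotInstalledError above is caused by the package name: statsmodels.api is a module inside the statsmodels package, not a conda package itself, so conda reports it as not installed. The update command that should work is:
conda update statsmodels
followed by a kernel restart so that the updated version is imported.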
I am using Jupyter Notebook on a Google Cloud Platform VM instance.
I finish work, stop the instance, and restart the VM and Jupyter Notebook the very next morning. Then I have to rerun all the code from the top, which is annoying because loading all the datasets takes a good 30 minutes.
I googled around and found that the lines below should help, but even with them at the top of my notebook the same problem occurs:
%reload_ext autoreload
%autoreload 2
Is there any way to keep everything in the Jupyter notebook so that I can just pick up where I left off and run it?
For anyone interested,
I used the Python library dill to save and restore the whole session:
import dill

dill.dump_session('notebook_env.db')   # run before stopping the instance
dill.load_session('notebook_env.db')   # run in the fresh kernel after restarting
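One caveat: dump_session pickles the entire interpreter state, so it may fail if anything in the session is unpicklable (an open database connection, for example). A sketch of saving just the objects you care about instead, where df is an illustrative stand-in for whatever takes 30 minutes to load:
import dill

df = list(range(10**6))   # stand-in for your real dataset

# before stopping the VM: save just this object
with open('df.pkl', 'wb') as f:
    dill.dump(df, f)

# the next morning: restore it
with open('df.pkl', 'rb') as f:
    df = dill.load(f)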
I use Jupyter Notebook to run a series of experiments that take some time.
Certain cells take far too long to execute, so it's normal that I'd like to close the browser tab and come back later. But when I do, the kernel interrupts running.
I guess there is a workaround for this, but I can't find it.
The simplest workaround to this seems to be the built-in cell magic %%capture:
%%capture output
# Time-consuming code here
Save, close tab, come back later. The output is now stored in the output variable:
output.show()
This will show all interim print results as well as the plain or rich cell output.
TL;DR:
Code doesn't stop when a tab closes, but the output can no longer find the browser session and loses the data about how it's supposed to be displayed, so it throws out all new output received until the code that was running when the tab closed finishes.
Long Version:
Unfortunately, this isn't implemented (as of Nov 24th). If there's a workaround, I can't find it either. (Still looking; will update with news.) There is a workaround that saves output and then reprints it, but it won't work if code is still running in that notebook. An alternative is to have a second notebook that you can get the output in.
I also need this functionality, and for the same reason. The kernel doesn't shut down or interrupt when a tab closes, and the code doesn't stop running. The warning given is exactly correct: "The kernel is busy, outputs may be lost."
Running
import time

a = 0
while a < 100:
    a += 1
    print(a)
    time.sleep(1)
in one cell, then closing the tab, reopening it, and then running
print(a)
from another cell will cause it to hang until the 100 seconds have finished and the code completes; then it will print 100.
When a tab is closed and you return, the Python process will be in the same state you left it (as of the last completed save). That was their intended behavior, and something they should have been clearer about in their documentation. The output from the running code actually does get sent to the browser upon reopening it (I've lost the reference that explains this), so hacks like the one in this comment will work, since they can receive those messages and just throw them into some cell.
Output is effectively only accessible through the endpoint connection. They've been working on this for a while (since before Jupyter), although I cannot find the current bug in the Jupyter repository (this one references it, but is not it).
The only general workaround seems to be finding a computer you can always leave on, leaving the page open on it while the code runs, and then remoting in, or relying on autosave to be able to access it elsewhere. This is a bad way to do it, but unfortunately it's the way I have to do it for now.
Related questions:
Closed IPython Notebook that was running code
Confirms that output will not be updated, but does not mention the interrupt functionality.
IPython Notebook - Keep printing to notebook output after closing browser
Offers a workaround in a link; referenced above.
First, install runipy:
pip install runipy
Now run your notebook in the background with the command below:
nohup runipy YourNotebook.ipynb OutputNotebook.ipynb >> notebook.log &
Now the output notebook will be saved, and you can also watch the logs while it runs with:
tail -f notebook.log
I have been struggling with this issue for some time now.
My workaround was to write all my logs to a file, so that when my browser closes (indeed, when a lot of logs come through, the browser hangs too) I can check on the kernel's progress by opening the log file (the log file can be opened in Jupyter too).
#!/usr/bin/python
import time
import datetime
import logging

logger = logging.getLogger()

def setup_file_logger(log_file):
    hdlr = logging.FileHandler(log_file)
    formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
    hdlr.setFormatter(formatter)
    logger.addHandler(hdlr)
    logger.setLevel(logging.INFO)

def log(message):
    # outputs to Jupyter console
    print('{} {}'.format(datetime.datetime.now(), message))
    # outputs to file
    logger.info(message)

setup_file_logger('out.log')

for i in range(10000):
    log('Doing hard work here i=' + str(i))
    log('Taking a nap now...')
    time.sleep(1000)
With JupyterLab:
This is not a problem if you are using JupyterLab (with current release v3.x.x).
To be more specific: after you close the tab/browser, the notebook's kernel keeps running (as long as the Jupyter server/your terminal is not closed), but the printed output of the running cell (if there is any) is interrupted.
So when you reopen the notebook, variables etc. are all kept and up to date; only the interrupted printed output is lost.
If you care about the printed info in this case, you could log it to a file, OR try Jupyter's execute API (see below).
With Jupyter Notebook:
If you are still sticking with the legacy (e.g. version 5.x/6.x) Jupyter Notebook, this was simply not possible in the past (i.e. prior to 2022).
BUT, with the planned Notebook v7 release, which reuses the JupyterLab codebase, this problem will also be solved in the new Jupyter Notebook.
So try JupyterLab, or wait and update to Notebook v7:
$ jupyter lab --version
3.4.4
# OR wait and update the notebook until
# the installed version of notebook is v7
$ jupyter notebook --version
6.4.12
With Jupyter's execute API:
Another workaround is to use Jupyter's execute API:
$ jupyter nbconvert --to notebook --execute mynotebook.ipynb
This is like running the notebook as a script, i.e. from the command line rather than through the browser UI.
After execution, a new file named mynotebook.nbconvert.ipynb is produced, and all printed output is kept in it, but all variables are lost. What we can do is pickle the variables we care about, as sketched below.
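A minimal sketch of that pickling idea (results and the file name are illustrative stand-ins for your own variables): put a cell like this at the end of the notebook that nbconvert executes, then load the file in an interactive session afterwards.
import pickle

# last cell of mynotebook.ipynb: persist whatever you will need later
results = {'answer': 42}   # stand-in for your real results
with open('results.pkl', 'wb') as f:
    pickle.dump(results, f)

# later, in an interactive notebook or REPL:
with open('results.pkl', 'rb') as f:
    results = pickle.load(f)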
And I don't think runipy is a good choice any more, since it's deprecated and unmaintained now that Jupyter has its own execute API.
ref:
Q: is it possible to make a jupyter notebook run even if the page is closed?
A: This is being solved in JupyterLab and will be solved in the future Notebook v7 release.
If you've set all cells to run and want to periodically check what's being printed, the following is a better option than %%capture, because you can open the log file at any time while the kernel is busy:
import sys

# send all subsequent print() output to a file;
# append mode ("a") keeps logs from earlier runs
sys.stdout = open("my_log.txt", "a")
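Because file writes are buffered, lines may not show up in the file immediately. A sketch of a variant that line-buffers the file and keeps a handle around so the console can be restored afterwards (the names are illustrative):
import sys

orig_stdout = sys.stdout
log_file = open("my_log.txt", "a", buffering=1)  # buffering=1 => line-buffered text file
sys.stdout = log_file

# ... long-running cells print here; tail -f my_log.txt to watch ...

sys.stdout = orig_stdout   # restore console output when done
log_file.close()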
I constructed this a while ago using jupyter nbconvert; it essentially runs a notebook in the background without any UI:
nohup jupyter nbconvert --ExecutePreprocessor.timeout=-1 --CodeFoldingPreprocessor.remove_folded_code=False --ExecutePreprocessor.allow_errors=True --ExecutePreprocessor.kernel_name=python3 --execute --to notebook --inplace ~/mynotebook.ipynb > ~/stdout.log 2> ~/stderr.log &
timeout=-1: no time out
remove_folded_code=False: needed if you have the Codefolding extension enabled
allow_errors=True: ignore errored cells and continue running the notebook to the end
kernel_name: specify if you have multiple kernels; check with jupyter kernelspec list