I use Jupyter Notebook to run a series of experiments that take some time.
Certain cells take far too long to execute, so I'd like to close the browser tab and come back later. But when I do, the running kernel gets interrupted.
I assume there is a workaround for this, but I can't find it.
The simplest workaround to this seems to be the built-in cell magic %%capture:
%%capture output
# Time-consuming code here
Save, close the tab, and come back later. The output is now stored in the output variable:
output.show()
This will replay all interim print results as well as the cell's plain or rich output.
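For example, a minimal sketch (the loop below is just a stand-in for your own long-running code):

%%capture output
import time
for i in range(60):
    print('finished step', i)   # interim output, captured instead of displayed
    time.sleep(1)

After reopening the tab, run output.show() in a new cell to replay everything that was captured.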
TL;DR:
Code doesn't stop when a tab closes, but the output can no longer find the browser session it belongs to and loses the information on how it should be displayed, so it throws away all new output until the code that was running when the tab closed finishes.
Long Version:
Unfortunately, this isn't implemented (as of Nov 24th). If there's a workaround, I can't find it either. (Still looking; will update with news.) There is a workaround that saves the output and then reprints it, but it won't work if code is still running in that notebook. An alternative would be to have a second notebook that you can receive the output in.
I also need this functionality, and for the same reason. The kernel doesn't shut down or interrupt when a tab closes, and the code doesn't stop running. The warning given is exactly right: "The kernel is busy, outputs may be lost."
Running
import time

a = 0
while a < 100:
    a += 1
    print(a)
    time.sleep(1)
in one cell, then closing the tab, opening it up again, and then running
print(a)
from another cell will hang until the 100 seconds have elapsed and the first cell completes; only then will it print 100.
When a tab is closed and you return, the Python process will be in the same state you left it in (as of the last completed save). That was the intended behavior, and it is something the documentation should have been clearer about. The output from the running code actually gets sent to the browser upon reopening (I've lost the reference that explains this), so hacks like the one in this comment work: they can receive those messages and simply dump them into some cell.
Output is effectively only saved in an accessible way through the endpoint connection. This has been worked on for a while (since before Jupyter), although I cannot find the current issue in the Jupyter repository (this one references it, but is not it).
The only general workaround seems to be finding a computer you can always leave on, leaving it on the page while the code runs, and then remoting in or relying on autosave to access it elsewhere. This is a bad way to do it, but unfortunately it's the way I have to for now.
Related questions:
Closed IPython Notebook that was running code
Confirms that output will not be updated, but does not mention the interrupt functionality.
IPython Notebook - Keep printing to notebook output after closing browser
Offers a workaround in a link (referenced above).
First, install runipy:
pip install runipy
And now run your notebook in the background with the below command:
nohup runipy YourNotebook.ipynb OutputNotebook.ipynb >> notebook.log &
Now the output notebook will be saved, and you can also watch the logs while it runs with:
tail -f notebook.log
I have been struggling with this issue for some time now as well.
My workaround was to write all my logs to a file, so that when my browser closes (indeed, when a lot of logs come through the browser, it hangs up too) I can still follow the kernel's progress by opening the log file (the log file can be opened from Jupyter too).
#!/usr/bin/python
import time
import datetime
import logging

logger = logging.getLogger()

def setup_file_logger(log_file):
    hdlr = logging.FileHandler(log_file)
    formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
    hdlr.setFormatter(formatter)
    logger.addHandler(hdlr)
    logger.setLevel(logging.INFO)

def log(message):
    # outputs to Jupyter console
    print('{} {}'.format(datetime.datetime.now(), message))
    # outputs to file
    logger.info(message)

setup_file_logger('out.log')

for i in range(10000):
    log('Doing hard work here i=' + str(i))
    log('Taking a nap now...')
    time.sleep(1000)
With JupyterLab:
This is not a problem if you are using JupyterLab (current release v3.x.x).
To be more specific, "not a problem" means that after you close the tab/browser, the notebook's kernel keeps running (as long as the Jupyter server/your terminal is not closed), but the cell's printed output (if there is any) is interrupted.
So when you reopen the notebook, variables etc. are all kept and up to date, except for the interrupted printed output.
If you care about the printed output in this case, you could try logging it to a file, or try using Jupyter's execute API (see below).
With Jupyter Notebook:
If you are still sticking with the legacy Jupyter Notebook (e.g. version 5.x/6.x), well, this was not possible in the past (i.e. prior to 2022).
BUT, with the planned Notebook v7 release, which reuses the JupyterLab codebase, this problem will also be solved in the new Jupyter Notebook.
So, try using JupyterLab, or wait and update to Notebook v7:
$ jupyter lab --version
3.4.4
$ # OR wait and update the notebook until the
$ # installed version of notebook is v7
$ jupyter notebook --version
6.4.12
With Jupyter's execute API:
Another workaround is to use Jupyter's execute API:
$ jupyter nbconvert --to notebook --execute mynotebook.ipynb
This is like running the notebook as a .py file from the command line, rather than in the browser UI.
After execution, a new file named mynotebook.nbconvert.ipynb is produced; all printed output is kept in it, but all variables are lost. What we can do is pickle the variables we care about.
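For instance, a minimal sketch (the variable names and file paths here are just illustrative assumptions):

import pickle

a, history = 42, [1, 2, 3]   # stand-ins for the results you care about

# At the end of mynotebook.ipynb, persist the results:
with open('results.pkl', 'wb') as f:
    pickle.dump({'a': a, 'history': history}, f)

# Later, in a fresh session, load them back:
with open('results.pkl', 'rb') as f:
    results = pickle.load(f)
print(results['a'])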
And I don't think runipy is still a good choice, since it's deprecated and unmaintained (superseded by Jupyter's execute API).
ref:
Q: is it possible to make a jupyter notebook run even if the page is closed?
A: This is being solved in JupyterLab and will be solved in the future Notebook v7 release.
If you've set all cells to run and want to periodically check what's being printed, the following code would be a better option than %%capture. You can always open up the log file while the kernel is busy.
import sys
sys.stdout = open("my_log.txt", "a")
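A slightly more careful variant (my own addition, not part of the original answer) keeps a reference to the real stdout so you can restore it, and uses line buffering so the file is updated after every print:

import sys

real_stdout = sys.stdout                           # keep a handle to restore later
sys.stdout = open('my_log.txt', 'a', buffering=1)  # line-buffered: flushed per line

print('this line lands in my_log.txt')

sys.stdout.close()
sys.stdout = real_stdout                           # back to normal printing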
I constructed this a while ago using jupyter nbconvert; it essentially runs a notebook in the background without any UI:
nohup jupyter nbconvert --ExecutePreprocessor.timeout=-1 --CodeFoldingPreprocessor.remove_folded_code=False --ExecutePreprocessor.allow_errors=True --ExecutePreprocessor.kernel_name=python3 --execute --to notebook --inplace ~/mynotebook.ipynb > ~/stdout.log 2> ~/stderr.log &
timeout=-1: no timeout
remove_folded_code=False: needed if you have the Codefolding extension enabled
allow_errors=True: ignore errored cells and continue running the notebook to the end
kernel_name: set this if you have multiple kernels; check with jupyter kernelspec list
The example shows:
I create a simple module (a Fibonacci calculator)
I start a PyCharm console, import the module, run the function inside the console, and it works.
Now I edit some print text in the module.
I go back to the console and run "import fibagain" again.
The console seems to do this without complaining.
But when I run the fib() function, it is still giving me results from the earlier version. I cannot make the console see the updated version of the fibagain.py file.
If I close the console and open it again, then run 'import fibagain', fib(3) will give me the latest version.
Sorry, I'm not permitted to post proper image links here; this address shows the screen capture:
Instead of importing again, you want:
reload(fibagain)
This will reload the updated module. (Note: this only works if fibagain has been imported some time earlier.)
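On Python 3, reload is no longer a built-in; it lives in importlib, so the equivalent is:

from importlib import reload

import fibagain    # must have been imported at least once already
reload(fibagain)   # re-executes fibagain.py so your edits take effect
fibagain.fib(3)    # now runs the updated version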
I see how to change certain settings for matplotlib in such a way that they are used to configure it each time I launch, including when I launch interactively with
ipython --pylab
but I'm not sure how to run arbitrary code each time I launch in this way, or how to ensure that certain packages have been imported. For example, I'd like to do the following whenever I launch as above:
from mpltools import style
style.use('ggplot')
How do I load and run specific packages when I launch 'matplotlib'?
From the IPython website, it seems you can place any .py file in the startup folder of your profile and it will be run when IPython starts. For help finding the profile where your startup folder is, see here.
To get the effect you're looking for, you need to check that matplotlib has been imported; the only way I can think of doing that is to add the following:
import sys

if "matplotlib" in sys.modules:
    # Do something
    print("")
    print("This seems to be working")
I have left the test in so you can quickly determine whether it is working.
Two things to note: I am using IPython 2.0.0-dev (in case this has any bearing), and secondly I suspect that inspecting sys.modules is not recommended, or at least that was the impression I got from the SO post I borrowed the idea from; perhaps someone more knowledgeable can enlighten us.
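Putting it together, a startup file along these lines (the filename is arbitrary; ~/.ipython/profile_default/startup/ is the default profile's startup folder) would apply the style from the question only when matplotlib is already loaded:

# ~/.ipython/profile_default/startup/00-mpl-style.py
import sys

# Only act when launched with --pylab, i.e. matplotlib is already imported
if "matplotlib" in sys.modules:
    from mpltools import style
    style.use('ggplot')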
I know this has been discussed here before, but I haven't found a solution that will work for me. I already have a Python script that I wrote, and I currently have it run at boot. What I would like to do is log all output, including any print statements to the console and any error messages that come up. I do like the logging module and would prefer to use it over watching all the output on the console. Any suggestions?
If you manage your script using supervisor, it will automatically handle all logging of stdout/stderr for you.
Additionally, it can automatically restart your script if it crashes.
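A minimal config sketch (the program name and paths are assumptions; adjust them for your setup):

; e.g. /etc/supervisor/conf.d/myscript.conf
[program:myscript]
command=/usr/bin/python /path/to/myscript.py
autostart=true                  ; start when supervisord starts (i.e. at boot)
autorestart=true                ; restart the script if it crashes
stdout_logfile=/var/log/myscript.out.log
stderr_logfile=/var/log/myscript.err.log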
I have a simple question. I am in the process of debugging some code. I am using Enthought Python, with the "PyLab" program. I edit my code using gEdit. I am using Ubuntu 10.04.4 LTS.
I use "run myfile.py" to run the program. Then I test myfile(somearguments), and see where the bugs are.
However, when I make changes to the code, using "run myfile.py" again does not make Python/PyLab pick up the changes. The result is that I get error messages pointing to lines that have no errors and no longer even contain the offending text. I tried using import and reload as well, but that didn't work.
How do I get Python/PyLab to see the new changes to my code? The only option I have for now is to fix the bug and then restart PyLab to confirm the fix.
Thanks!
Did you try removing the .pyc file?
While the .pyc file exists, it may be that PyLab keeps reading it instead of reloading the source file.
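For example (the filename is an assumption based on the question):

import os

# Delete the stale bytecode so the next "run myfile.py" recompiles the source
if os.path.exists('myfile.pyc'):
    os.remove('myfile.pyc')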