Convert IPython notebook to directly-executable Python script

I have a jupyter/ipython notebook that I am using for prototyping and tutoring.
I export it as a python script using the menu dropdown or nbconvert, i.e.
ipython nbconvert --to python notebook.ipynb
However, I would like to make notebook.py executable directly without having to hack it by hand each time, in order that I can keep updating notebook.ipynb and overwriting notebook.py with my changes. I also want to include command-line arguments in notebook.py. My boilerplate is, for example:
#!/usr/bin/env ipython
import sys
x=sys.argv[-1]
with chmod +x notebook.py of course.
One route could be to make these lines (be they python or command-line directives) ignorable in the jupyter/ipython notebook - is there a way to do this by e.g. detecting the jupyter/ipython environment?
Edit1: This is tantamount to saying:
How can I include lines in the notebook.ipynb that will be ignored in the notebook environment but parsed in notebook.py generated from it?
Edit2: This question is a partial answer, but doesn't tell me how to include the #!/usr/bin/env ipython line: How can I check if code is executed in the IPython notebook?
Edit3: Could be useful, but only if %%bash /usr/bin/env ipython would work - would it..? How do I provide inline input to an IPython (notebook) shell command?
Edit4: Another attempted answer (subtle): since # is a comment in Python, putting #!/usr/bin/env ipython in the first cell of the notebook means that it will be ignored in jupyter/ipython but respected in the exported notebook.py. However, the #! directive does not end up at the top of the file (where it needs to be to take effect), although the preceding lines could easily be chopped off:
> more notebook.py
# coding: utf-8
# In[1]:
#!/usr/bin/env ipython
# In[2]:
print 'Hello'
# In[ ]:
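For illustration, a minimal post-processing sketch of that chopping (assuming the exported notebook.py does contain a #! line somewhere; the file name is the one from above):

# strip everything before the first #! line so it ends up at the top of notebook.py
with open('notebook.py') as f:
    lines = f.readlines()

start = next(i for i, line in enumerate(lines) if line.startswith('#!'))

with open('notebook.py', 'w') as f:
    f.writelines(lines[start:])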

The answer turned out to be rather straightforward.
Part 1 - Making the exported notebook.py directly executable:
As described here, nbconvert can be customized with arbitrary templates.
So create a file hashbang.tpl containing:
#!/usr/bin/env ipython
{% extends 'python.tpl'%}
Then at the command line execute:
jupyter nbconvert --to python 'notebook.ipynb' --stdout --template=hashbang.tpl > notebook.py
Hey presto:
> more notebook.py
#!/usr/bin/env ipython
# coding: utf-8
# In[1]:
print 'Hello'
...
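To regenerate notebook.py whenever the notebook changes, the same command can be wrapped in a tiny helper, e.g. (a sketch assuming Python 3.5+ and hashbang.tpl sitting next to the notebook; file names are illustrative):

import os
import stat
import subprocess

# re-export the notebook through the custom template ...
subprocess.run(
    "jupyter nbconvert --to python 'notebook.ipynb' --stdout "
    "--template=hashbang.tpl > notebook.py",
    shell=True, check=True,
)

# ... and do the equivalent of chmod +x notebook.py
mode = os.stat('notebook.py').st_mode
os.chmod('notebook.py', mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)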
Part 2 - Detecting the notebook environment:
This answer from https://stackoverflow.com/a/39662359/1021819 should do it, i.e. use the following function to test for the notebook environment:
def isnotebook():
    # From https://stackoverflow.com/a/39662359/1021819
    try:
        shell = get_ipython().__class__.__name__
        if shell == 'ZMQInteractiveShell':
            return True   # Jupyter notebook or qtconsole
        elif shell == 'TerminalInteractiveShell':
            return False  # Terminal running IPython
        else:
            return False  # Other type (?)
    except NameError:
        return False      # Probably standard Python interpreter
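
Combining the two parts, a cell near the top of the notebook can then guard the command-line handling, e.g. (a sketch; the default value is just a placeholder):

import sys

if isnotebook():
    x = 'some-default'   # placeholder value while prototyping in Jupyter
else:
    x = sys.argv[-1]     # real command-line argument when run as ./notebook.py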

Related

Send the output from a startup script to any new notebook in Jupyter Lab

I have a startup file that looks like this
import os

if os.getenv("ENV") == "prod" or os.getenv("MASTER_DB_HOST") == "db.domain.com":
    from colorama import Fore, Style
    print(f"{Fore.RED}WARNING!!!")
    print(f"{Style.DIM}It seems you are running in a PROD like environment (or against the PROD database)")
from my_app.models import Foo, Bar
So when I run IPython, this warning message is shown when needed.
When I use jupyter lab this script is also executed (because the imported objects are available), but I would like to get its output in the first cell of any new notebook. Is this possible?
Alternatively, could I define a default notebook to start with and run it automatically?

Anaconda update through IPython console in Spyder opens script in text editor

I have a problem concerning the command used in Python to update packages, or even conda itself. Every time I use the command conda update something, the file conda-script.py is opened by Sublime Text (my default text editor).
# -*- coding: utf-8 -*-
import sys

# Before any more imports, leave cwd out of sys.path for internal 'conda shell.*' commands.
# see https://github.com/conda/conda/issues/6549
if len(sys.argv) > 1 and sys.argv[1].startswith('shell.') and sys.path and sys.path[0] == '':
    # The standard first entry in sys.path is an empty string,
    # and os.path.abspath('') expands to os.getcwd().
    del sys.path[0]

if __name__ == '__main__':
    from conda.cli import main
    sys.exit(main())
Then the update in IPython stays pending and never starts or finishes until I close the text editor manually, after which the following message appears:
Note: you may need to restart the kernel to use updated packages.
This, indeed, did not update anything. I tried to reinstall Anaconda but nothing changed.

Automatically convert jupyter notebook to .py

I know there have been a few questions about this but I have not found anything robust enough.
Currently I am using, from the terminal, a command that creates the .py files and then moves them to another folder:
jupyter nbconvert --to script '/folder/notebooks/notebook.ipynb' && \
mv ./folder/notebooks/*.py ./folder/python_scripts && \
The workflow then is to code in a notebook, check with git status what changed since last commit, create a potentially huge number of nbconvert commands, then move them all.
I would like to use something like the !jupyter nbconvert --to script found in this answer, but without the cell that creates the python file appearing in the .py itself.
Because if that line appears, my code won't ever work right.
So, is there a proper way of dealing with this problem? One that can be automated, and not manually copying files names, creating the command, executing and then starting again.
You can add the following code in the last cell in your notebook file.
!jupyter nbconvert --to script mycode.ipynb
with open('mycode.py', 'r') as f:
    lines = f.readlines()
with open('mycode.py', 'w') as f:
    for line in lines:
        if 'nbconvert --to script' in line:
            break
        else:
            f.write(line)
It will generate the .py file and then remove this very code from it. You will end up with a clean script that will not call !jupyter nbconvert anymore.
Another way would be to use Jupytext as an extension for your Jupyter installation (it can easily be pip-installed).
Jupytext Description (see github page)
Have you always wished Jupyter notebooks were plain text documents?
Wished you could edit them in your favorite IDE? And get clear and
meaningful diffs when doing version control? Then... Jupytext may well
be the tool you're looking for!
It will keep paired notebooks in sync with .py files. You then just need to move your .py files, or gitignore the notebooks, as possible workflows.
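For a scripted one-off conversion, Jupytext also exposes a small Python API; a minimal sketch (assuming Jupytext is installed; the paths are taken from the question and the output format is illustrative):

import jupytext

# read the notebook and write it back out as a plain Python script
nb = jupytext.read('folder/notebooks/notebook.ipynb')
jupytext.write(nb, 'folder/python_scripts/notebook.py', fmt='py:percent')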
Go to File > Save and Export Notebook as... > Executable Scripts
This is the closest I have found to what I had in mind, but I have yet to try and implement it:
# A post-save hook to make a script equivalent whenever the notebook is saved (replacing the --script option in older versions of the notebook):
# Place this in jupyter_notebook_config.py; the `c` at the bottom is the Jupyter config object.
import io
import os
from notebook.utils import to_api_path

_script_exporter = None

def script_post_save(model, os_path, contents_manager, **kwargs):
    """convert notebooks to Python script after save with nbconvert

    replaces `jupyter notebook --script`
    """
    from nbconvert.exporters.script import ScriptExporter

    if model['type'] != 'notebook':
        return

    global _script_exporter

    if _script_exporter is None:
        _script_exporter = ScriptExporter(parent=contents_manager)

    log = contents_manager.log

    base, ext = os.path.splitext(os_path)
    script, resources = _script_exporter.from_filename(os_path)
    script_fname = base + resources.get('output_extension', '.txt')
    log.info("Saving script /%s", to_api_path(script_fname, contents_manager.root_dir))

    with io.open(script_fname, 'w', encoding='utf-8') as f:
        f.write(script)

c.FileContentsManager.post_save_hook = script_post_save
Additionally, this looks like it has worked for some users on GitHub, so I put it here for reference:
import os
from subprocess import check_call

def post_save(model, os_path, contents_manager):
    """post-save hook for converting notebooks to .py scripts"""
    if model['type'] != 'notebook':
        return  # only do this for notebooks
    d, fname = os.path.split(os_path)
    check_call(['ipython', 'nbconvert', '--to', 'script', fname], cwd=d)

# As with the snippet above, register it in jupyter_notebook_config.py with:
# c.FileContentsManager.post_save_hook = post_save

Run .py script via sh import module error

This is a very basic question from a beginner on how to code in Python and run a script.
I'm writing a script using Xcode 9.4.1, which is supposed to be for Python 3.6. I then have an sh script, run.sh, in the same folder as the script (say "my_folder"), which simply looks like:
python my_script.py
The python script looks like
from tick.base import TimeFunction
import numpy as np
import matplotlib.pyplot as plt
v = np.arange(0., 10., 1.)
f_v = v + 1
u = TimeFunction((v, f_v))
plt.plot(v, u.value(v))
print('donne!\n')
But when I try to run run.sh from the terminal I get an "ImportError: No module named tick.base" error.
But the tick folder is actually present in "my_computer/anaconda3/lib/python3.6/site-packages", and up to last week I was using Spyder from Anaconda Navigator and everything was working correctly, with no import error.
The question is quite trivial; in some sense it is simply "what is the typical procedure to code and run a Python script, and how are modules supposed to be imported/downloaded when running on a given machine?"
I need this since my script is to be run on another machine through ssh, and I am using my laptop to make some attempts. Up to last year I used to work in C and only needed to move some folders with code and .h files.
Thanks for the help!
EDIT 1:
From Spyder 3.2.7 (where the script was giving no problem), I printed the result of:
import sys
print(sys.path)
I then manually copied the contents into the sys.path variable in my_script.py, reran run.sh, and now get a new (strange) error:
Traceback (most recent call last):
[...]
File "/Users/my_computer/anaconda3/lib/python3.6/site-packages/tick/array/build/array.py", line 106
def tick_double_array_to_file(_file: 'std::string', array: 'ArrayDouble const &') -> "void":
^
SyntaxError: invalid syntax
First, check that the python you are calling the script with points to the Anaconda python and that it is the version you are expecting. You can run the "which python" command on Linux and Mac to see which path python points to. If it is pointing to a different version or build of python than the one you are expecting, add the needed path to the PATH environment variable. On Linux and Mac this can be done by adding the following line to the .bashrc file in your home folder:
export PATH=/your/python/path:$PATH
And then source the .bashrc file.
source .bashrc
If you are on an operating system like CentOS, breaking the default python path can break yum, so be careful before changing it.
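A quick way to see which interpreter actually runs the script (and whether it is the Anaconda one) is to print sys.executable from inside the script itself:

import sys

# full path of the interpreter executing this script,
# e.g. something like /Users/<user>/anaconda3/bin/python when Anaconda's python is used
print(sys.executable)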
I am running a script in PyCharm and under the Project Interpreter I have the path
C:\envs\conda\keras2\python.exe
When I try to run the script via ssh on the server I get a 'no module named' error. I get
/usr/bin/python as the answer to 'which python' on the server itself. Could you tell me which path I must add for the script to run properly?

how to start django shell with ipython in qtconsole mode?

When I start the Django shell by typing python manage.py shell, the IPython shell is started. Is it possible to make Django start IPython in qtconsole mode (i.e. make it run ipython qtconsole)?
Arek
edit:
so I'm trying what Andrew Wilkinson suggested in his answer - extending my Django app with a command based on the original Django shell command. As far as I understand, the code which starts ipython in the original version is this:
from django.core.management.base import NoArgsCommand

class Command(NoArgsCommand):
    requires_model_validation = False

    def handle_noargs(self, **options):
        from IPython.frontend.terminal.embed import TerminalInteractiveShell
        shell = TerminalInteractiveShell()
        shell.mainloop()
Any advice on how to change this code to start ipython in qtconsole mode?
second edit:
what I found works so far is to start 'ipython qtconsole' from the location where my project's settings.py is (or set sys.path if starting from a different location), and then execute this:
import settings
import django.core.management
django.core.management.setup_environ(settings)
and now I can import my models, list all instances, etc.
The docs here say:
If you'd rather not use manage.py, no problem. Just set the
DJANGO_SETTINGS_MODULE environment variable to mysite.settings and run
python from the same directory manage.py is in (or ensure that
directory is on the Python path, so that import mysite works).
So it should be enough to set that environment variable and then run ipython qtconsole. You could make a simple script to do this for you automatically.
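Such a script might look like this (a sketch; the project name mysite comes from the quoted docs and the file name is hypothetical):

# launch_qtconsole.py - run it from the directory that contains manage.py
import os
import subprocess

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")
subprocess.call(["ipython", "qtconsole"])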
I created a shell script with the following:
/path/to/ipython qtconsole --pylab inline -c "run /path/to/my/site/shell.py"
You only need the --pylab inline part if you want the cool inline matplotlib graphs.
And I created a python script shell.py in /path/to/my/site with:
import os
working_dir = os.path.dirname(__file__)
os.chdir(working_dir)
import settings
import django.core.management
django.core.management.setup_environ(settings)
Running my shell script gets me an ipython qtconsole with the benefits of the django shell.
You can check the code that runs the shell here. You'll see that there is nowhere to configure which shell is run.
What you could do is copy this file, rename it as shell_qt.py and place it in your own project's management/commands directory. Change it to run the QT console and then you can run manage.py shell_qt.
Since Django version 1.4, usage of django.core.management.setup_environ() is deprecated. A solution that works for both the IPython notebook and the QTconsole is this (just execute this from within your Django project directory):
In [1]: from django.conf import settings
In [2]: from mydjangoproject.settings import DATABASES as MYDATABASES
In [3]: settings.configure(DATABASES=MYDATABASES)
Update: If you work with Django 1.7, you additionally need to execute the following:
In [4]: import django; django.setup()
Using django.conf.settings.configure(), you specify the database settings of your project and then you can access all your models in the usual way.
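After that, the usual ORM queries work in the console, for example (the app and model names here are hypothetical):

In [5]: from myapp.models import Customer
In [6]: Customer.objects.count()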
If you want to automate these imports, you can e.g. create an IPython profile by running:
ipython profile create mydjangoproject
Each profile contains a directory called startup. You can put arbitrary Python scripts in there and they will be executed just after IPython has started. In this example, you find it under
~/.ipython/profile_<mydjangoproject>/startup/
Just put a script in there which contains the code shown above, probably enclosed by a try..except clause to handle ImportErrors. You can then start IPython with the given profile like this:
ipython qtconsole --profile=mydjangoproject
or
ipython notebook --profile=mydjangoproject
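A minimal startup script along those lines might look like this (the file and project names are illustrative; it just wraps the commands above in a try..except):

# ~/.ipython/profile_mydjangoproject/startup/00-django.py (hypothetical name)
try:
    from django.conf import settings
    from mydjangoproject.settings import DATABASES as MYDATABASES
    settings.configure(DATABASES=MYDATABASES)
    import django
    django.setup()  # only needed for Django >= 1.7
except ImportError:
    pass  # Django or the project settings cannot be imported; skip the setup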
I also wanted to open the Django shell in qtconsole. Looking inside manage.py solved the problem for me:
Launch IPython qtconsole, cd to the project base directory and run:
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
Don't forget to change 'myproject' to your project name.
You can create a command that extends the base shell command and imports the IPythonQtConsoleApp like so:
Create a file qtshell.py in yourapp/management/commands with:
from django.core.management.commands import shell

class Command(shell.Command):
    def _ipython(self):
        """Start IPython Qt console"""
        from IPython.qt.console.qtconsoleapp import IPythonQtConsoleApp
        app = IPythonQtConsoleApp.instance()
        app.initialize(argv=[])
        app.start()
then just use python manage.py qtshell
A somewhat undocumented feature of shell_plus is the ability to run it in "kernel only mode". This allows us to connect to it from another shell, such as one running qtconsole.
For example, in one shell do:
django-admin shell_plus --kernel
# or == ./manage.py shell_plus --kernel
This will print out something like:
# Shell Plus imports ...
...
To connect another client to this kernel, use:
--existing kernel-23600.json
Then, in another shell run:
ipython qtconsole --existing kernel-23600.json
This should now open a QtConsole. One other tip: instead of running another shell, you can also hit Ctrl+Z and run bg to tell the current process to run in the background.
You can install django-extensions and then run
python manage.py shell_plus --ipython
