I've been trying to figure out how to save and read back the history of my Python commands in a ptpython console, but haven't been able to do so. All of my efforts so far have been variations of this answer. However, I still am not able to read in my history.
I would simply like to be able to press the ↑ and ↓ arrow keys to step through my Python commands from a previous console session (not the current console session that I'm in). Here's what I currently have in my $PYTHONSTARTUP file:
# Add auto-completion and a stored history file of commands to your Python
# interactive interpreter. Requires Python 2.0+, readline. Autocomplete is
# bound to the Esc key by default (you can change it - see readline docs).
#
# Store the file in ~/.pystartup, and set an environment variable to point
# to it: "export PYTHONSTARTUP=/home/user/.pystartup" in bash.
#
# Note that PYTHONSTARTUP does *not* expand "~", so you have to put in the
# full path to your home directory.
import atexit
import os
import readline
import rlcompleter
import sys
try:
    from ptpython.repl import embed
except ImportError:
    print('ptpython is not available: falling back to standard prompt')
else:
    sys.exit(embed(globals(), locals()))
historyPath = os.path.expanduser("~/.ptpython/history")
def save_history(historyPath=historyPath):
    import readline
    readline.write_history_file(historyPath)
if os.path.exists(historyPath):
    readline.read_history_file(historyPath)
atexit.register(save_history)
readline.parse_and_bind('tab: complete')
del os, atexit, readline, rlcompleter, save_history, historyPath
And my $PYTHONSTARTUP variable is:
$ echo $PYTHONSTARTUP
/Users/[redacted]/.pystartup
I'm on Python 3.7.3, macOS 10.14.6, and ptpython 2.0.4.
Thanks
If you check the source code for embed, you will see that it has the option history_filename=
embed(globals(), locals(), history_filename=historyPath)
import os
try:
    from ptpython.repl import embed
except ImportError:
    print('ptpython is not available: falling back to standard prompt')
else:
    history_path = os.path.expanduser("~/.ptpython/history")
    embed(globals(), locals(), history_filename=history_path)
BTW: If the folder ~/.ptpython doesn't exist, then you will have to create it before running the code.
EDIT (2022):
import os
try:
    from ptpython.repl import embed
except ImportError:
    print('ptpython is not available: falling back to standard prompt')
else:
    history_dir = os.path.expanduser("~/.ptpython")
    history_path = os.path.join(history_dir, "history")
    if not os.path.exists(history_path):
        os.makedirs(history_dir, exist_ok=True)  # create the folder if it doesn't exist
        open(history_path, 'a').close()  # create an empty file
    embed(globals(), locals(), history_filename=history_path)
I have a Python jupyter notebook, which I can successfully export to HTML with a table of contents through the command line:
$ jupyter nbconvert nb.ipynb --template toc2
How do I do the same, but programmatically (via API)?
This is what I achieved so far:
import os
import nbformat
from nbconvert import HTMLExporter
from nbconvert.preprocessors import ExecutePreprocessor
nb_path = './nb.ipynb'
with open(nb_path) as f:
    nb = nbformat.read(f, as_version=4)
ep = ExecutePreprocessor(kernel_name='python3')
ep.preprocess(nb)
exporter = HTMLExporter()
html, _ = exporter.from_notebook_node(nb)
output_html_file = "./nb.html"
with open(output_html_file, "w") as f:
    f.write(html)
print(f"Result HTML file: {output_html_file}")
It does successfully export the HTML, but without the table of contents. I don't know how to set --template toc2 through the API.
I found two ways to do this:
The way that most faithfully reproduces $ jupyter nbconvert nb.ipynb --template toc2 involves setting the HTMLExporter().template_file attribute with the toc2.tpl template file.
The main trick is to find where this file lives on your system. For me it was <base filepath>/Anaconda3/Lib/site-packages/jupyter_contrib_nbextensions/templates/toc2.tpl
Full code below:
from nbconvert import HTMLExporter
from nbconvert.writers import FilesWriter
import nbformat
from pathlib import Path
input_notebook = "My_notebook.ipynb"
output_html = "My_notebook"
toc2_tpl_path = "<base filepath>/Anaconda3/Lib/site-packages/jupyter_contrib_nbextensions/templates/toc2.tpl"
notebook_node = nbformat.read(input_notebook, as_version=4)
exporter = HTMLExporter()
exporter.template_file = toc2_tpl_path # THIS IS THE CRITICAL LINE
(body, resources) = exporter.from_notebook_node(notebook_node)
write_file = FilesWriter()
write_file.write(
    output=body,
    resources=resources,
    notebook_name=output_html
)
An alternative approach is to use the TocExporter class in the nbconvert_support module, instead of HTMLExporter.
However, this mimics the command line expression jupyter nbconvert --to html_toc nb.ipynb rather than setting the template of the standard HTML export method.
The main issue with this approach is that there does not seem to be a way to embed figures with it, which is the default for the template-based method above.
However, if figure embedding doesn't matter, this solution is more flexible across different systems, since you don't have to track down different file paths for toc2.tpl.
Here is an example below:
from nbconvert import HTMLExporter
from nbconvert.writers import FilesWriter
import nbformat
from pathlib import Path
from jupyter_contrib_nbextensions.nbconvert_support import TocExporter # CRITICAL MODULE
input_notebook = "My_notebook.ipynb"
output_html = "My_notebook"
notebook_node = nbformat.read(input_notebook, as_version=4)
exporter = TocExporter() # CRITICAL LINE
(body, resources) = exporter.from_notebook_node(notebook_node)
write_file = FilesWriter()
write_file.write(
    output=body,
    resources=resources,
    notebook_name=output_html
)
As a final note, I wanted to mention my motivation for doing this for anyone else who comes across this answer. One of the machines I work on uses Windows, so getting the command prompt to run jupyter commands requires some messing with the Windows PATH environment, which was turning into a headache. I could get around this by using the Anaconda prompt, but that requires opening the prompt and typing in the full command every time. I could try to write a script with os.system(), but this calls the default command line (the Windows command prompt), not the Anaconda prompt. The methods above allow me to convert Jupyter notebooks to HTML with TOCs and embedded figures by running a simple Python script from within any notebook.
This was not clear in the documentation, but the constructor of the TemplateExporter class mentions the following:
template_file : str (optional, kw arg)
    Template to use when exporting.
After testing it, I can confirm that all you need to do is pass the filepath to the template file under this argument for your exporter.
HTMLExporter(template_file=path_to_template_file)
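For example, a minimal end-to-end sketch using that keyword argument (the toc2.tpl path is the same placeholder as above and must be adjusted for your installation):
import nbformat
from nbconvert import HTMLExporter

# Path to the toc2 template; placeholder, adjust for your system.
path_to_template_file = "<base filepath>/Anaconda3/Lib/site-packages/jupyter_contrib_nbextensions/templates/toc2.tpl"

# Read the notebook and export it with the TOC template passed to the constructor.
nb = nbformat.read("My_notebook.ipynb", as_version=4)
exporter = HTMLExporter(template_file=path_to_template_file)
body, resources = exporter.from_notebook_node(nb)

with open("My_notebook.html", "w", encoding="utf-8") as f:
    f.write(body)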
I'm new to Python. What I want to do is, after setting up the esptool package (pip install esptool), call its main method with a bunch of arguments in my application. Something like:
esptool.py -p /dev/ttyUSB0 write_flash -fm qio 0x0000
There is an issue I faced: esptool is not in the list of packages Python can import (even though it is already installed with pip). How am I going to import it and call the main method?
Resolving import issues
You can't simply invoke import esptool because esptool.py is an executable script, thus not meant to be imported like a plain module. However, there are workarounds for importing code from executable scripts; here are two I know of:
extending sys.path
You can extend sys.path to include the bin directory containing the esptool.py script. A simple check from the command line:
$ PYTHONPATH=$(which esptool.py)/.. python -c "import esptool; esptool.main()"
should print the usage help text.
Extending sys.path in code:
import os
import sys
try:
    from shutil import which
except ImportError:
    from distutils.spawn import find_executable as which
bindir = os.path.dirname(which('esptool.py'))
sys.path.append(bindir)  # after this line, esptool becomes importable
import esptool
if __name__ == '__main__':
    esptool.main()
using import machinery
You can avoid extending sys.path by using the mechanisms of importing Python code from arbitrary files. I like this solution more than fiddling with sys.path, but unfortunately, it is not portable between Python 2 and 3.
Python 3.5+
import importlib.machinery
import importlib.util
from shutil import which
if __name__ == '__main__':
    loader = importlib.machinery.SourceFileLoader('esptool', which('esptool.py'))
    spec = importlib.util.spec_from_loader(loader.name, loader)
    esptool = importlib.util.module_from_spec(spec)
    loader.exec_module(esptool)  # after this line, esptool is imported
    esptool.main()
Python 2.7
import imp
from distutils.spawn import find_executable as which
if __name__ == '__main__':
    esptool = imp.load_source('esptool', which('esptool.py'))
    esptool.main()
Passing command line arguments
The command line arguments are stored in the sys.argv list, so you will have to temporarily overwrite it in order to pass the arguments to esptool's main function:
# assuming esptool is imported
import sys
if __name__ == '__main__':
    # save the current arguments
    argv_original = sys.argv[:]
    # overwrite the arguments to be passed to the esptool argparser
    sys.argv[:] = ['', '-p', '/dev/ttyUSB0', 'write_flash', '-fm', 'qio', '0x0000']
    try:
        esptool.main()
    except Exception:
        # TODO deal with errors here
        pass
    finally:  # restore original arguments
        sys.argv[:] = argv_original
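Putting the two parts together, here is a minimal end-to-end sketch (Python 3, assuming esptool.py is on your PATH; the arguments are the ones from the question):
import os
import sys
from shutil import which

# Make the directory containing esptool.py importable, then import it.
sys.path.append(os.path.dirname(which('esptool.py')))
import esptool

if __name__ == '__main__':
    # Temporarily swap in the arguments for esptool's argparser.
    saved_argv = sys.argv[:]
    sys.argv[:] = ['', '-p', '/dev/ttyUSB0', 'write_flash', '-fm', 'qio', '0x0000']
    try:
        esptool.main()
    finally:
        # Restore the original arguments.
        sys.argv[:] = saved_argv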
I am debugging code, which has this line:
run('python /home/some_user/some_repo/pyflights/usertools/import.py /home/some_user/some_repo/pyflights/config/index_import.conf flights.map --import')
run is some analog of os.system.
So, I want to run this code without using the run function. I need to import my import.py file and run it with sys.argv. But how can I do this?
from some_repo.pyflights.usertools import import
There is no way to import import because import is a keyword. Moreover, importing a Python file is different from running it as a script, because most scripts have a section like
if __name__ == '__main__':
    ....
When the program is running as a script, the variable __name__ has the value '__main__'.
If you are ready to call a subprocess, you can use
`subprocess.call(...)`
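For instance, a minimal sketch with subprocess.call, reusing the paths from the question:
import subprocess

# Run the script in a child process; subprocess.call returns its exit code.
ret = subprocess.call([
    'python',
    '/home/some_user/some_repo/pyflights/usertools/import.py',
    '/home/some_user/some_repo/pyflights/config/index_import.conf',
    'flights.map',
    '--import',
])
print('import.py exited with code', ret)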
Edit: actually, you can import import like so:
from importlib import import_module
mod = import_module('import')
however, it won't have the same effect as calling the script. Notice that the script probably uses sys.argv, and this must be addressed too.
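As a side note (not part of the original approach), the standard library's runpy module can run a file under the __main__ name, which is closer to executing it as a script; sys.argv still has to be set by hand:
import runpy
import sys

script = '/home/some_user/some_repo/pyflights/usertools/import.py'
# The script reads its arguments from sys.argv, so set them before running it.
sys.argv = [script,
            '/home/some_user/some_repo/pyflights/config/index_import.conf',
            'flights.map',
            '--import']
runpy.run_path(script, run_name='__main__')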
Edit: here is an ersatz that you can try if you really don't want a subprocess. I don't guarantee it will work.
import shlex
import sys
import types
def run(args):
"""Runs a python program with arguments within the current process.
Arguments:
#args: a sequence of arguments, the first one must be the file path to the python program
This is not guaranteed to work because the current process and the
executed script could modify the python running environment in incompatible ways.
"""
old_main, sys.modules['__main__'] = sys.modules['__main__'], types.ModuleType('__main__')
old_argv, sys.argv = sys.argv, list(args)
try:
with open(sys.argv[0]) as infile:
source = infile.read()
exec(source, sys.modules['__main__'].__dict__)
except SystemExit as exc:
if exc.code:
raise RuntimeError('run() failed with code %d' % exc.code)
finally:
sys.argv, sys.modules['__main__'] = old_argv, old_main
command = '/home/some_user/some_repo/pyflights/usertools/import.py /home/some_user/some_repo/pyflights/config/index_import.conf flights.map --import'
run(shlex.split(command))
I have the Python script below to download files from Artifactory.
#!/usr/bin/python
# -*- coding: utf-8 -*-
import os
import tarfile
import urllib
from urllib import urlretrieve
import ConfigParser
Config = ConfigParser.ConfigParser()
Config.read('/vivek/release.conf')
code_version = Config.get('main', 'app_version')
os.chdir('/tmp/')
arti_st_url='http://repo.com/artifactory/libs-release-local/com/name/tgz/abc.ear/{0}/abc.ear-{0}.tar.gz'.format(code_version)
arti_st_name='abc.ear-{0}.tar.gz'.format(code_version)
arti_sl_url='http://repo.com/artifactory/libs-release-local/com/name/tgz/def.ear/{0}/def.ear-{0}.tar.gz'.format(code_version)
arti_sl_name='def.ear-{0}.tar.gz'.format(code_version)
urllib.urlretrieve(arti_st_url, arti_st_name)
urllib.urlretrieve(arti_sl_url, arti_sl_name)
oneEAR = 'abc.ear-{0}.tar.gz'.format(code_version)
twoEAR = 'def.ear-{0}.tar.gz'.format(code_version)
tar = tarfile.open(oneEAR)
tar.extractall()
tar.close()
tar1 = tarfile.open(twoEAR)
tar1.extractall()
tar1.close()
os.remove(oneEAR)
os.remove(twoEAR)
This script works perfectly, thanks to stackoverflow.
Here's the next question. There's a variable "protocol" in release.conf. If it's equal to "localcopy", an existing Python script does something. If "protocol" is equal to "artifactory",
the above script should be called and executed. How can I achieve this?
Note: I am a beginner in Python, but my tasks are not. So, please help me out, guys.
You could simply use:
import os
os.system("script_path")
to execute the script file. But there should be a shebang line at the very top of the script file you want to execute. If your Python interpreter is at /usr/bin/python, this would be:
#!/usr/bin/python
This assumes you are a Linux user.
On Windows, the shebang isn't supported; Windows itself determines what program to use to run a *.py file.
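If you would rather not rely on a shebang at all, you can name the interpreter explicitly; a small sketch (the script path is a placeholder):
import os

# Calling the interpreter explicitly works without a shebang line.
exit_code = os.system("python /path/to/artifactory_script.py")
print("script exited with", exit_code)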
Edit:
To call those two scripts depending on a config property value, you could just make another script, called for example runthis.py, which contains instructions like:
protocol = Config.get('main', 'protocol')
if protocol == 'localcopy':
    os.system('path_to_localcopy_script')
if protocol == 'artifactory':
    os.system('path_to_other_script')
Don't forget to import the needed modules in that new script.
Then you just run the script you made.
That is one way to do this.
If you don't want to create an additional script, then put the code you wrote in a function, like:
def main():
    ...
    # Your code
    ...
And at the very bottom of your script file write:
if __name__ == '__main__':
    protocol = Config.get('main', 'protocol')
    if protocol == 'localcopy':
        os.system('path_to_localcopy_script')
    if protocol == 'artifactory':
        main()
The if __name__ == '__main__' block executes only if you run the script yourself (not when it is called from another script, for example).
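For completeness, a self-contained sketch of the second option (Python 2, matching the question's script; the localcopy script path is a placeholder):
import os
import ConfigParser

Config = ConfigParser.ConfigParser()
Config.read('/vivek/release.conf')

def main():
    # ... the artifactory download code from the question goes here ...
    pass

if __name__ == '__main__':
    protocol = Config.get('main', 'protocol')
    if protocol == 'localcopy':
        os.system('path_to_localcopy_script')  # placeholder for the existing script
    elif protocol == 'artifactory':
        main()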
I've been playing around with IPython.parallel and I wanted to use some custom modules of my own, but haven't been able to do it as explained in the cookbook using dview.sync_imports(). The only thing that has worked for me was something like
def my_parallel_func(args):
    import sys
    sys.path.append('/path/to/my/module')
    import my_module
    # and all the rest
and then in the main just do
if __name__ == '__main__':
    # set up dview...
    dview.map(my_parallel_func, my_args)
The correct way to do this would in my opinion be something like
with dview.sync_imports():
    import sys
    sys.path.append('/path/to/my/module')
    import my_module
but this throws an error saying there is no module named my_module.
So, what is the right way of doing it using dview.sync_imports()?
The problem is that you're changing the PYTHONPATH just in the local process running the Client, and not in the remote processes running in the ipcluster.
You can observe this behaviour if you run the following piece of code:
from IPython.parallel import Client
rc = Client()
dview = rc[:]
with dview.sync_imports():
    import sys
    sys.path[:] = ['something']
def parallel(x):
    import sys
    return sys.path
print 'Local: ', sys.path
print 'Remote: ', dview.map_sync(parallel, range(1))
Basically, all the modules that you want to use with sync_imports must already be on the PYTHONPATH of the engines.
If a module is not on the PYTHONPATH, you must add its location to the path inside the function that you execute remotely, and then import the module within that function.
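Put together, a rough end-to-end sketch of that workaround (assuming an ipcluster is already running; '/path/to/my/module' and my_module are the placeholders from the question):
from IPython.parallel import Client

rc = Client()
dview = rc[:]

def my_parallel_func(x):
    # Extend the path and import on the engine itself, as described above.
    import sys
    sys.path.append('/path/to/my/module')
    import my_module
    return my_module.__name__

print dview.map_sync(my_parallel_func, range(4))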