How to set the argument "--template toc2" through nbconvert API? - python

I have a Python jupyter notebook, which I can successfully export to HTML with a table of content through the command line:
$ jupyter nbconvert nb.ipynb --template toc2
How do I do the same, but programmatically (via API)?
This is what I achieved so far:
import os
import nbformat
from nbconvert import HTMLExporter
from nbconvert.preprocessors import ExecutePreprocessor
nb_path = './nb.ipynb'
with open(nb_path) as f:
    nb = nbformat.read(f, as_version=4)

ep = ExecutePreprocessor(kernel_name='python3')
ep.preprocess(nb)

exporter = HTMLExporter()
html, _ = exporter.from_notebook_node(nb)

output_html_file = "./nb.html"
with open(output_html_file, "w") as f:
    f.write(html)

print(f"Result HTML file: {output_html_file}")
It does successfully export the HTML, but without the table of contents. I don't know how to set --template toc2 through the API.

I found two ways to do this:
The way that most faithfully reproduces $ jupyter nbconvert nb.ipynb --template toc2 involves setting the template_file attribute of HTMLExporter() to the toc2.tpl template file.
The main trick is to find where this file lives on your system. For me it was <base filepath>/Anaconda3/Lib/site-packages/jupyter_contrib_nbextensions/templates/toc2.tpl
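If you'd rather not hard-code that path, here is a minimal sketch for locating it from the installed package, assuming jupyter_contrib_nbextensions keeps its templates in a templates/ subfolder (which matched the path above on my system):
import os
import jupyter_contrib_nbextensions

# Locate toc2.tpl relative to the installed package (assumed layout)
toc2_tpl_path = os.path.join(
    os.path.dirname(jupyter_contrib_nbextensions.__file__),
    "templates", "toc2.tpl",
)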
Full code below:
from nbconvert import HTMLExporter
from nbconvert.writers import FilesWriter
import nbformat
from pathlib import Path
input_notebook = "My_notebook.ipynb"
output_html ="My_notebook"
toc2_tpl_path = "<base filepath>/Anaconda3/Lib/site-packages/jupyter_contrib_nbextensions/templates/toc2.tpl"
notebook_node = nbformat.read(input_notebook, as_version=4)
exporter = HTMLExporter()
exporter.template_file = toc2_tpl_path # THIS IS THE CRITICAL LINE
(body, resources) = exporter.from_notebook_node(notebook_node)
write_file = FilesWriter()
write_file.write(
    output=body,
    resources=resources,
    notebook_name=output_html
)
An alternative approach is to use the TocExporter class in the nbconvert_support module, instead of HTMLExporter.
However, this mimics the command-line expression jupyter nbconvert --to html_toc nb.ipynb rather than setting the template of the standard HTML export method.
The main issue with this approach is that there does not seem to be a way to embed figures with it, whereas embedding is the default for the template-based method above.
However, if figure embedding doesn't matter, this solution is more portable across systems, since you don't have to track down the path to toc2.tpl.
Here is an example below:
from nbconvert import HTMLExporter
from nbconvert.writers import FilesWriter
import nbformat
from pathlib import Path
from jupyter_contrib_nbextensions.nbconvert_support import TocExporter # CRITICAL MODULE
input_notebook = "My_notebook.ipynb"
output_html ="My_notebook"
notebook_node = nbformat.read(input_notebook, as_version=4)
exporter = TocExporter() # CRITICAL LINE
(body, resources) = exporter.from_notebook_node(notebook_node)
write_file = FilesWriter()
write_file.write(
    output=body,
    resources=resources,
    notebook_name=output_html
)
As a final note, I wanted to mention my motivation for doing this to anyone else who comes across this answer. One of the machines I work on uses Windows, so getting the command prompt to run jupyter commands requires some messing with the Windows PATH environment, which was turning into a headache. I could get around this by using the Anaconda prompt, but that requires opening the prompt and typing the full command every time. I could try to write a script with os.system(), but this calls the default command line (the Windows command prompt), not the Anaconda prompt. The methods above allow me to convert Jupyter notebooks to HTML with TOCs and embedded figures by running a simple Python script from within any notebook.

This was not clear in the documentation, but the constructor of the TemplateExporter class mentions the following:
template_file : str (optional, kw arg)
Template to use when exporting.
After testing it, I can confirm that all you need to do is pass the filepath of the template file to this argument of your exporter.
HTMLExporter(template_file=path_to_template_file)
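Putting it together as a rough sketch (toc2_tpl_path is the path to the toc2.tpl template tracked down in the first answer):
import nbformat
from nbconvert import HTMLExporter

nb = nbformat.read("My_notebook.ipynb", as_version=4)
exporter = HTMLExporter(template_file=toc2_tpl_path)  # pass the template path directly
body, resources = exporter.from_notebook_node(nb)
with open("My_notebook.html", "w", encoding="utf-8") as f:
    f.write(body)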

Related

Streamlit + Spacy causing "AttributeError: 'PathDistribution' object has no attribute '_normalized_name'"

Note: This is not a duplicate question, as I have gone through this answer and made the necessary package downgrade, but it still results in the same error. Details below.
# System Details
MacBook Air (M1, 2020)
MacOS Monterey 12.3
Python 3.10.8 (Miniconda environment)
Relevant library versions from pip freeze
importlib-metadata==3.4.0
PyMuPDF==1.21.1
spacy==3.4.4
spacy-alignments==0.9.0
spacy-legacy==3.0.11
spacy-loggers==1.0.4
spacy-transformers==1.2.0
streamlit==1.17.0
flair==0.11.3
catalogue==2.0.8
# Setup
I am trying to use Spacy for some text processing over a pdf document uploaded to a Streamlit app.
The Streamlit app basically contains an upload button, submit button (which calls the preprocessing and spacy functions), and a text_area to display the processed text.
Here is the working code for uploading a pdf document and extracting its text -
import streamlit as st
import fitz
def load_file(file):
    doc = fitz.open(stream=file.read(), filetype="pdf")
    text = []
    with doc:
        for page in doc:
            text.append(page.get_text())
    text = "\n".join(text)
    return text
#####################################################################
st.title("Test app")
col1, col2 = st.columns([1,1], gap='small')
with col1:
with st.expander("Description -", expanded=True):
st.write("This is the description of the app.")
with col2:
with st.form(key="my_form"):
uploaded_file = st.file_uploader("Upload",type='pdf', accept_multiple_files=False, label_visibility="collapsed")
submit_button = st.form_submit_button(label="Process")
#####################################################################
col1, col2 = st.columns([1,3], gap='small')
with col1:
st.header("Metrics")
with col2:
st.header("Text")
if uploaded_file is not None:
text = load_file(uploaded_file)
st.text_area(text)
# Reproduce base code
install necessary libraries
save above code to a test.py file
from terminal navigate to folder and run streamlit run test.py
navigate to http://localhost:8501/ in browser
download this sample pdf and upload it to the app as an example
This results in a functioning app -
# Issue I am facing
Now, the issue comes when I add spacy to the Python file using import spacy and rerun the Streamlit app; this error pops up -
AttributeError: 'PathDistribution' object has no attribute '_normalized_name'
Traceback:
File "/Users/akshay_sehgal/miniconda3/envs/demo_ui/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script
exec(code, module.__dict__)
File "/Users/akshay_sehgal/Library/CloudStorage/________/Documents/Code/Demo UI/Streamlit/keyphrase_extraction_template/test.py", line 3, in <module>
import spacy
File "/Users/akshay_sehgal/miniconda3/envs/demo_ui/lib/python3.10/site-packages/spacy/__init__.py", line 6, in <module>
from .errors import setup_default_warnings
File "/Users/akshay_sehgal/miniconda3/envs/demo_ui/lib/python3.10/site-packages/spacy/errors.py", line 2, in <module>
from .compat import Literal
File "/Users/akshay_sehgal/miniconda3/envs/demo_ui/lib/python3.10/site-packages/spacy/compat.py", line 3, in <module>
from thinc.util import copy_array
File "/Users/akshay_sehgal/miniconda3/envs/demo_ui/lib/python3.10/site-packages/thinc/__init__.py", line 5, in <module>
from .config import registry
File "/Users/akshay_sehgal/miniconda3/envs/demo_ui/lib/python3.10/site-packages/thinc/config.py", line 1, in <module>
import catalogue
File "/Users/akshay_sehgal/miniconda3/envs/demo_ui/lib/python3.10/site-packages/catalogue/__init__.py", line 20, in <module>
AVAILABLE_ENTRY_POINTS = importlib_metadata.entry_points() # type: ignore
File "/Users/akshay_sehgal/miniconda3/envs/demo_ui/lib/python3.10/importlib/metadata/__init__.py", line 1009, in entry_points
return SelectableGroups.load(eps).select(**params)
File "/Users/akshay_sehgal/miniconda3/envs/demo_ui/lib/python3.10/importlib/metadata/__init__.py", line 459, in load
ordered = sorted(eps, key=by_group)
File "/Users/akshay_sehgal/miniconda3/envs/demo_ui/lib/python3.10/importlib/metadata/__init__.py", line 1006, in <genexpr>
eps = itertools.chain.from_iterable(
File "/Users/akshay_sehgal/miniconda3/envs/demo_ui/lib/python3.10/importlib/metadata/_itertools.py", line 16, in unique_everseen
k = key(element)
# What have I tried?
First thing I tried was to isolate the spacy code and run it in a notebook in the specific environment, which worked without any issue.
Next, after researching SO (this answer) and the GitHub issues, I found that importlib.metadata could be the culprit, so I downgraded it using the following commands, but it didn't fix anything.
pip uninstall importlib-metadata
pip install importlib-metadata==3.4.0
I removed the complete environment and set the whole thing up again from scratch, following the same steps I used the first time (just in case I had made some mistake during its setup). But still the same error.
The final option I would be left with is to containerize the spacy processing as an API and then call it via the Streamlit app using requests.
I would be happy to share the requirements.txt if needed, but I will have to figure out how to upload it somewhere via my office pc. Do let me know if that is required and I will find a way.
Would appreciate any help in solving this issue!
Upgrade importlib-metadata to importlib-metadata>=4.3.0 to avoid this particular error.
There can be complicated interactions between the built-in importlib.metadata and the additional importlib_metadata package, and you need a newer version of importlib-metadata to get some of the updates/fixes related to this.
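For example (assuming the environment is managed with pip), something like:
pip install --upgrade "importlib-metadata>=4.3.0"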
With python 3.10 and importlib-metadata==3.4.0, you can see this error with the following example (spacy and streamlit are not required):
import importlib_metadata
import importlib.metadata
importlib.metadata.entry_points()

How to read history in ptpython console?

I've been trying to figure out how to save and read the history of my Python commands in a ptpython console, but haven't been able to do so. All of my efforts so far have been variations of this answer. However, I still am not able to read in my history.
I would simply like to be able to press the ↑ and ↓ arrows to go through my Python commands from a previous console session (not the current console session that I'm in). Here's what I currently have in my $PYTHONSTARTUP file:
# Add auto-completion and a stored history file of commands to your Python
# interactive interpreter. Requires Python 2.0+, readline. Autocomplete is
# bound to the Esc key by default (you can change it - see readline docs).
#
# Store the file in ~/.pystartup, and set an environment variable to point
# to it: "export PYTHONSTARTUP=/home/user/.pystartup" in bash.
#
# Note that PYTHONSTARTUP does *not* expand "~", so you have to put in the
# full path to your home directory.
import atexit
import os
import readline
import rlcompleter
import sys

try:
    from ptpython.repl import embed
except ImportError:
    print('ptpython is not available: falling back to standard prompt')
else:
    sys.exit(embed(globals(), locals()))

historyPath = os.path.expanduser("~/.ptpython/history")

def save_history(historyPath=historyPath):
    import readline
    readline.write_history_file(historyPath)

if os.path.exists(historyPath):
    readline.read_history_file(historyPath)

atexit.register(save_history)
readline.parse_and_bind('tab: complete')

del os, atexit, readline, rlcompleter, save_history, historyPath
And my $PYTHONSTARTUP variable is:
$ echo $PYTHONSTARTUP
/Users/[redacted]/.pystartup
I'm on Python 3.7.3, macOS 10.14.6, and ptpython 2.0.4.
Thanks
If you check the source code for embed, you will see the option history_filename=
embed(globals(), locals(), history_filename=historyPath)
import os

try:
    from ptpython.repl import embed
except ImportError:
    print('ptpython is not available: falling back to standard prompt')
else:
    history_path = os.path.expanduser("~/.ptpython/history")
    embed(globals(), locals(), history_filename=history_path)
BTW: if the folder ~/.ptpython doesn't exist, you will have to create it before running the code.
EDIT (2022):
import os

try:
    from ptpython.repl import embed
except ImportError:
    print('ptpython is not available: falling back to standard prompt')
else:
    history_dir = os.path.expanduser("~/.ptpython")
    history_path = os.path.join(history_dir, "history")
    if not os.path.exists(history_path):
        os.makedirs(history_dir, exist_ok=True)  # create the folder if it doesn't exist
        open(history_path, 'a').close()          # create an empty history file
    embed(globals(), locals(), history_filename=history_path)

How to run OpenAI Gym .render() over a server

I am running a python 2.7 script on a p2.xlarge AWS server through Jupyter (Ubuntu 14.04). I would like to be able to render my simulations.
Minimal working example
import gym
env = gym.make('CartPole-v0')
env.reset()
env.render()
env.render() makes (among other things) the following errors:
...
HINT: make sure you have OpenGL install. On Ubuntu, you can run
'apt-get install python-opengl'. If you're running on a server,
you may need a virtual frame buffer; something like this should work:
'xvfb-run -s \"-screen 0 1400x900x24\" python <your_script.py>'")
...
NoSuchDisplayException: Cannot connect to "None"
I would like to somehow be able to see the simulations. It would be ideal if I could get it inline, but any display method would be nice.
Edit: This is only an issue with some environments, like classic control.
Update I
Inspired by this I tried the following, instead of the xvfb-run -s \"-screen 0 1400x900x24\" python <your_script.py> (which I couldn't get to work).
xvfb-run -a jupyter notebook
Running the original script I now get instead
GLXInfoException: pyglet requires an X server with GLX
Update II
Issue #154 seems relevant. I tried disabling the pop-up, and directly creating the RGB colors
import gym
env = gym.make('CartPole-v0')
env.reset()
img = env.render(mode='rgb_array', close=True)
print(type(img)) # <--- <type 'NoneType'>
img = env.render(mode='rgb_array', close=False) # <--- ERROR
print(type(img))
I get ImportError: cannot import name gl_info.
Update III
With inspiration from @Torxed I tried creating a video file, and then rendering it (a fully satisfying solution).
Using the code from 'Recording and uploading results'
import gym
env = gym.make('CartPole-v0')
env.monitor.start('/tmp/cartpole-experiment-1', force=True)
observation = env.reset()
for t in range(100):
    # env.render()
    print(observation)
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
    if done:
        print("Episode finished after {} timesteps".format(t+1))
        break
env.monitor.close()
I tried following your suggestions, but got ImportError: cannot import name gl_info when running env.monitor.start(...).
From my understanding, the problem is that OpenAI Gym uses pyglet, and pyglet 'needs' a screen in order to compute the RGB colors of the image that is to be rendered. It is therefore necessary to trick Python into thinking that there is a monitor connected.
Update IV
FYI there are solutions online using bumblebee that seem to work. This should work if you have control over the server, but since AWS run in a VM I don't think you can use this.
Update V
If you have this problem and don't know what to do (like me): the state of most environments is simple enough that you can create your own rendering mechanism. Not very satisfying, but... you know.
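For what it's worth, here is a minimal sketch of such a hand-rolled renderer, assuming a CartPole-like observation of [cart position, cart velocity, pole angle, pole angular velocity]:
import numpy as np
import matplotlib.pyplot as plt

def draw_cartpole(obs, pole_len=1.0):
    # obs assumed to be [cart position, cart velocity, pole angle, pole angular velocity]
    x, _, theta, _ = obs
    plt.clf()
    plt.plot([x - 0.25, x + 0.25], [0, 0], lw=6)      # cart
    plt.plot([x, x + pole_len * np.sin(theta)],
             [0, pole_len * np.cos(theta)], lw=3)     # pole
    plt.xlim(-2.5, 2.5)
    plt.ylim(-0.5, 1.5)
    plt.pause(0.01)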
Got a simple solution working:
If on a linux server, open jupyter with
$ xvfb-run -s "-screen 0 1400x900x24" jupyter notebook
In Jupyter
import matplotlib.pyplot as plt
%matplotlib inline
from IPython import display
After each step
def show_state(env, step=0, info=""):
    plt.figure(3)
    plt.clf()
    plt.imshow(env.render(mode='rgb_array'))
    plt.title("%s | Step: %d %s" % (env._spec.id, step, info))
    plt.axis('off')
    display.clear_output(wait=True)
    display.display(plt.gcf())
Note: if your environment is not unwrapped, pass env.env to show_state.
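A hypothetical usage inside a rollout loop might look like this (CartPole-v0 is just an example environment):
import gym

env = gym.make('CartPole-v0')
obs = env.reset()
for step in range(200):
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)
    show_state(env, step)   # use show_state(env.env, step) if env is wrapped
    if done:
        break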
This GitHub issue gave an answer that worked great for me. It's nice because it doesn't require any additional dependencies (I assume you already have matplotlib) or configuration of the server.
Just run, e.g.:
import gym
import matplotlib.pyplot as plt
%matplotlib inline
env = gym.make('Breakout-v0') # insert your favorite environment
render = lambda : plt.imshow(env.render(mode='rgb_array'))
env.reset()
render()
Using mode='rgb_array' gives you back a numpy.ndarray with the RGB values for each position, and matplotlib's imshow (or other methods) displays these nicely.
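For instance, a quick sanity check of the returned frame (the exact shape depends on the environment; the Atari frame size below is only illustrative):
frame = env.render(mode='rgb_array')
print(type(frame), frame.shape)  # e.g. <class 'numpy.ndarray'> (210, 160, 3) for an Atari game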
Note that if you're rendering multiple times in the same cell, this solution will plot a separate image each time. This is probably not what you want. I'll try to update this if I figure out a good workaround for that.
Update to render multiple times in one cell
Based on this StackOverflow answer, here's a working snippet (note that there may be more efficient ways to do this with an interactive plot; this way seems a little laggy on my machine):
import gym
from IPython import display
import matplotlib.pyplot as plt
%matplotlib inline
env = gym.make('Breakout-v0')
env.reset()
for _ in range(100):
    plt.imshow(env.render(mode='rgb_array'))
    display.display(plt.gcf())
    display.clear_output(wait=True)
    action = env.action_space.sample()
    env.step(action)
Update to increase efficiency
On my machine, this was about 3x faster. The difference is that instead of calling imshow each time we render, we just change the RGB data on the original plot.
import gym
from IPython import display
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
env = gym.make('Breakout-v0')
env.reset()
img = plt.imshow(env.render(mode='rgb_array')) # only call this once
for _ in range(100):
    img.set_data(env.render(mode='rgb_array'))  # just update the data
    display.display(plt.gcf())
    display.clear_output(wait=True)
    action = env.action_space.sample()
    env.step(action)
I think we should just capture renders as video by using OpenAI Gym wrappers.Monitor
and then display it within the Notebook.
Example:
Dependencies
!apt install python-opengl
!apt install ffmpeg
!apt install xvfb
!pip3 install pyvirtualdisplay
# Virtual display
from pyvirtualdisplay import Display
virtual_display = Display(visible=0, size=(1400, 900))
virtual_display.start()
Capture as video
import gym
from gym import wrappers
env = gym.make("SpaceInvaders-v0")
env = wrappers.Monitor(env, "/tmp/SpaceInvaders-v0")
for episode in range(2):
    observation = env.reset()
    step = 0
    total_reward = 0
    while True:
        step += 1
        env.render()
        action = env.action_space.sample()
        observation, reward, done, info = env.step(action)
        total_reward += reward
        if done:
            print("Episode: {0},\tSteps: {1},\tscore: {2}"
                  .format(episode, step, total_reward))
            break
env.close()
Display within Notebook
import os
import io
import base64
from IPython.display import display, HTML
def ipython_show_video(path):
    """Show a video at `path` within IPython Notebook."""
    if not os.path.isfile(path):
        raise NameError("Cannot access: {}".format(path))
    video = io.open(path, 'r+b').read()
    encoded = base64.b64encode(video)
    display(HTML(
        data="""
        <video alt="test" controls>
            <source src="data:video/mp4;base64,{0}" type="video/mp4" />
        </video>
        """.format(encoded.decode('ascii'))
    ))
ipython_show_video("/tmp/SpaceInvaders-v0/openaigym.video.4.10822.video000000.mp4")
I hope it helps. ;)
I managed to run and render openai/gym (even with mujoco) remotely on a headless server.
# Install and configure X window with virtual screen
sudo apt-get install xserver-xorg libglu1-mesa-dev freeglut3-dev mesa-common-dev libxmu-dev libxi-dev
# Configure the nvidia-x
sudo nvidia-xconfig -a --use-display-device=None --virtual=1280x1024
# Run the virtual screen in the background (:0)
sudo /usr/bin/X :0 &
# We only need to set up the virtual screen once
# Run the program with the virtual screen
DISPLAY=:0 <program>
# If you don't want to type `DISPLAY=:0` every time
export DISPLAY=:0
Usage:
DISPLAY=:0 ipython2
Example:
import gym
env = gym.make('Ant-v1')
arr = env.render(mode='rgb_array')
print(arr.shape)
# plot or save wherever you want
# plt.imshow(arr) or scipy.misc.imsave('sample.png', arr)
There's also this solution using pyvirtualdisplay (an Xvfb wrapper). One thing I like about this solution is you can launch it from inside your script, instead of having to wrap it at launch:
from pyvirtualdisplay import Display
display = Display(visible=0, size=(1400, 900))
display.start()
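Once the virtual display is running, rendering should work as usual; a minimal sketch:
import gym

env = gym.make('CartPole-v0')
env.reset()
frame = env.render(mode='rgb_array')  # works because the virtual display is active
print(frame.shape)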
I ran into this myself.
Using xvfb as X-server somehow clashes with the Nvidia drivers.
But finally this post pointed me in the right direction.
Xvfb works without any problems if you install the Nvidia driver with the -no-opengl-files option and CUDA with --no-opengl-libs option.
Once you know this, it should work. Since it took me quite some time to figure this out, and I don't seem to be the only one running into problems with xvfb and the Nvidia drivers, I wrote down all the necessary steps to set everything up on an AWS EC2 instance with Ubuntu 16.04 LTS here.
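As a rough sketch of the relevant installer invocations, assuming .run installers for both the driver and CUDA (the version placeholders are not real file names; adjust to your setup):
# NVIDIA driver installer, skipping the OpenGL files
sudo ./NVIDIA-Linux-x86_64-<version>.run --no-opengl-files
# CUDA runfile installer, skipping its bundled OpenGL libraries
sudo ./cuda_<version>_linux.run --no-opengl-libs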
Referencing my other answer here: Display OpenAI gym in Jupyter notebook only
I made a quick working example here which you could fork: https://kyso.io/eoin/openai-gym-jupyter with two examples of rendering in Jupyter - one as an mp4, and another as a realtime gif.
The .mp4 example is quite simple.
import gym
from gym import wrappers
env = gym.make('SpaceInvaders-v0')
env = wrappers.Monitor(env, "./gym-results", force=True)
env.reset()
for _ in range(1000):
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
    if done: break
env.close()
Then display it in a new Jupyter cell, or download it from the server to somewhere you can view the video.
import io
import base64
from IPython.display import HTML
video = io.open('./gym-results/openaigym.video.%s.video000000.mp4' % env.file_infix, 'r+b').read()
encoded = base64.b64encode(video)
HTML(data='''
<video width="360" height="auto" alt="test" controls><source src="data:video/mp4;base64,{0}" type="video/mp4" /></video>'''
.format(encoded.decode('ascii')))
If you're on a server with public access, you could run python -m http.server in the gym-results folder and just watch the videos there.
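Something like (port 8000 is just the default):
cd gym-results
python -m http.server 8000
# then open http://<server-ip>:8000/ in a browser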
I encountered the same problem and stumbled upon the answers here. Mixing them helped me to solve the problem.
Here's a step by step solution:
Install the following:
apt-get install -y python-opengl xvfb
Start your jupyter notebook via the following command:
xvfb-run -s "-screen 0 1400x900x24" jupyter notebook
Inside the notebook:
import gym
import matplotlib.pyplot as plt
%matplotlib inline
env = gym.make('MountainCar-v0') # insert your favorite environment
env.reset()
plt.imshow(env.render(mode='rgb_array'))
Now you can put the same thing in a loop to render it multiple times.
from IPython import display
for _ in range(100):
    plt.imshow(env.render(mode='rgb_array'))
    display.display(plt.gcf())
    display.clear_output(wait=True)
    action = env.action_space.sample()
    env.step(action)
Hope this works for anyone else still facing an issue. Thanks to Andrews and Nathan for their answers.
I avoided the issues with using matplotlib by simply using PIL, Python Image Library:
import gym, PIL
env = gym.make('SpaceInvaders-v0')
array = env.reset()
PIL.Image.fromarray(env.render(mode='rgb_array'))
I found that I didn't need to set up the Xvfb frame buffer.
I was looking for a solution that works in Colaboratory and ended up with this
from IPython import display
import numpy as np
import time
import gym
env = gym.make('SpaceInvaders-v0')
env.reset()
import PIL.Image
import io
def showarray(a, fmt='png'):
    a = np.uint8(a)
    f = io.BytesIO()
    ima = PIL.Image.fromarray(a).save(f, fmt)
    return f.getvalue()
imagehandle = display.display(display.Image(data=showarray(env.render(mode='rgb_array')), width=450), display_id='gymscr')
while True:
    time.sleep(0.01)
    env.step(env.action_space.sample())  # take a random action
    display.update_display(display.Image(data=showarray(env.render(mode='rgb_array')), width=450), display_id='gymscr')
EDIT 1:
You could use xvfbwrapper for the Cartpole environment.
from IPython import display
from xvfbwrapper import Xvfb
import numpy as np
import time
import pyglet
import gym
import PIL.Image
import io
vdisplay = Xvfb(width=1280, height=740)
vdisplay.start()
env = gym.make('CartPole-v0')
env.reset()
def showarray(a, fmt='png'):
    a = np.uint8(a)
    f = io.BytesIO()
    ima = PIL.Image.fromarray(a).save(f, fmt)
    return f.getvalue()
imagehandle = display.display(display.Image(data=showarray(env.render(mode='rgb_array')), width=450), display_id='gymscr')
for _ in range(1000):
    time.sleep(0.01)
    observation, reward, done, info = env.step(env.action_space.sample())  # take a random action
    display.update_display(display.Image(data=showarray(env.render(mode='rgb_array')), width=450), display_id='gymscr')
vdisplay.stop()
If you're working with standard Jupyter, there's a better solution though. You can use the CommManager to send messages with updated Data URLs to your HTML output.
IPython Inline Screen Example
In Colab the CommManager is not available. The more restrictive output module has a method called eval_js() which seems to be kind of slow.
I had the same problem, and I_like_foxes' solution of reinstalling the Nvidia drivers with no OpenGL fixed things. Here are the commands I used for Ubuntu 16.04 and a GTX 1080ti:
https://gist.github.com/8enmann/931ec2a9dc45fde871d2139a7d1f2d78
This might be a complete workaround, but I used a docker image with a desktop environment, and it works great. The docker image is at https://hub.docker.com/r/dorowu/ubuntu-desktop-lxde-vnc/
The command to run is
docker run -p 6080:80 dorowu/ubuntu-desktop-lxde-vnc
Then browse http://127.0.0.1:6080/ to access the Ubuntu desktop.
Below is a gif showing the Mario Bros gym environment running and being rendered. As you can see, it is fairly responsive and smooth.
In my IPython environment, Andrew Schreiber's solution doesn't plot images smoothly. The following is my solution:
If on a linux server, open jupyter with
$ xvfb-run -s "-screen 0 1400x900x24" jupyter notebook
In Jupyter
import time
import matplotlib.pyplot as plt
%matplotlib inline
%matplotlib notebook
from IPython import display
Display iteration:
done = False
obs = env.reset()
fig = plt.figure()
ax = fig.add_subplot(111)
plt.ion()
fig.show()
fig.canvas.draw()
while not done:
    # action = pi.act(True, obs)[0]  # pi means a policy which produces an action, if you have one
    # obs, reward, done, info = env.step(action)  # do the action, if you have one
    env_rnd = env.render(mode='rgb_array')
    ax.clear()
    ax.imshow(env_rnd)
    fig.canvas.draw()
    time.sleep(0.01)
I created this mini-package which allows you to render your environment onto a browser by just adding one line to your code.
Put your code in a function and replace your normal env.render() with yield env.render(mode='rgb_array'). Encapsulate this function with the render_browser decorator.
import gym
from render_browser import render_browser
@render_browser
def test_policy(policy):
    # Your function/code here.
    env = gym.make('Breakout-v0')
    obs = env.reset()
    while True:
        yield env.render(mode='rgb_array')
        # ... run policy ...
        obs, rew, _, _ = env.step(action)
test_policy(policy)
When you visit your_ip:5000 in your browser, test_policy() will be called and you'll be able to see the rendered environment in your browser window.

Python or LibreOffice Save xlsx file encrypted with password

I am trying to save an Excel file encrypted with a password. I have tried following the guide at https://help.libreoffice.org/Common/Protecting_Content_in, and it works perfectly. However, this is in the GUI, and I am looking for a solution using the command-line interface in headless mode.
I have looked at man libreoffice, but I could not find anything there.
Likewise I have looked at the documentation of the Python 3 library openpyxl, but I did not find anything useful there either.
Is it possible to save an Excel 2007+ file encrypted with a password on Ubuntu 14.04/16.04 using the command line (or a Python library), without requiring any user interaction or an X session?
There is a solution using Jython and Apache POI. If you would like to use it from CPython/PyPy, you can use the subprocess module to call the external Jython script.
I assume that you have Java JRE/JDK installed
Create a non-encrypted xlsx file with Excel/Calc (or with xlsxwriter or openpyxl) and save it as test1.xlsx
Download standalone Jython
Download Apache POI
Extract Apache POI into the same directory as the standalone Jython jar
Save the following Jython script as encrypt.py:
import os
import sys
from java.io import BufferedInputStream
from java.io import FileInputStream
from java.io import FileOutputStream
from java.io import File
from java.io import IOException
from org.apache.poi.poifs.crypt import EncryptionInfo, EncryptionMode
from org.apache.poi.poifs.crypt import CipherAlgorithm, HashAlgorithm
from org.apache.poi.poifs.crypt.agile import AgileEncryptionInfoBuilder
from org.apache.poi.openxml4j.opc import OPCPackage, PackageAccess
from org.apache.poi.poifs.filesystem import POIFSFileSystem
from org.apache.poi.ss.usermodel import WorkbookFactory
def encrypt_xlsx(in_fname, out_fname, password):
    # read
    in_f = File(in_fname)
    in_wb = WorkbookFactory.create(in_f, password)
    in_fis = FileInputStream(in_fname)
    in_wb.close()

    # encryption
    out_poi_fs = POIFSFileSystem()
    info = EncryptionInfo(EncryptionMode.agile)
    enc = info.getEncryptor()
    enc.confirmPassword(password)
    opc = OPCPackage.open(in_f, PackageAccess.READ_WRITE)
    out_os = enc.getDataStream(out_poi_fs)
    opc.save(out_os)
    opc.close()

    # write
    out_fos = FileOutputStream(out_fname)
    out_poi_fs.writeFilesystem(out_fos)
    out_fos.close()

if __name__ == '__main__':
    in_fname = sys.argv[1]
    out_fname = sys.argv[2]
    password = sys.argv[3]
    encrypt_xlsx(in_fname, out_fname, password)
Call it from console:
java -cp "jython-standalone-2.7.0.jar:poi-3.15/lib/commons-codec-1.10.jar:poi-3.15/lib/commons-collections4-4.1.jar:poi-3.15/poi-3.15.jar:poi-3.15/poi-ooxml-3.15.jar:poi-3.15/poi-ooxml-schemas-3.15.jar:poi-3.15/ooxml-lib/curvesapi-1.04.jar:poi-3.15/ooxml-lib/xmlbeans-2.6.0.jar" org.python.util.jython -B encrypt.py test1.xlsx test1enc.xlsx 12345678
Where:
encrypt.py - name of script
test1.xlsx - input filename
test1enc.xlsx - output filename
12345678 - password
The final encrypted xlsx should be in test1enc.xlsx.

markdown module has no inlinepatterns attribute

I'm writing a markdown extension, but when I run it from the python command line:
>>> import markdown
>>> markdown.markdown('foo --deleted-- bar', ['myextension'])
I get the following error:
AttributeError: 'module' object has no attribute 'inlinepatterns'
On this line:
md.inlinepatterns.add('del', del_tag, '>not_strong')
I've updated markdown to 2.3.1 and I'm running it in Python 2.6. The interpreter appears to be finding my mdx_myextension.py file, as the traceback reflects that it has registered the extension.
Seems like you are referencing the attribute by the wrong name.
Use inlinePatterns instead of inlinepatterns.
See the Python-Markdown documentation - Extensions API
You may need to import markdown.inlinepatterns
Markdown extension code
import markdown
import markdown.inlinepatterns
DEL_RE = r'(--)(.*?)--'
class MyExtension(markdown.Extension):
    def extendMarkdown(self, md, md_globals):
        # Create the del pattern
        del_tag = markdown.inlinepatterns.SimpleTagPattern(DEL_RE, 'del')
        # Insert del pattern into markdown parser
        md.inlinePatterns.add('del', del_tag, '>not_strong')

def makeExtension(configs=None):
    return MyExtension(configs=configs)
Example code
import markdown
import mdx_myextension
if __name__ == "__main__":
    print markdown.markdown('foo --deleted-- bar', ['myextension'])
Reference: http://achinghead.com/python-markdown-adding-insert-delete.html
