How do I embed an IPython Notebook in an iframe (new)

I have successfully achieved this using the method documented at Run IPython Notebook in Iframe from another Domain. However, this required editing the user config file. I was really hoping to be able to set this up via the command line instead (for reasons).
http://ipython.org/ipython-doc/1/config/overview.html indicates that configuration via the command line is possible. However, all the examples are for simple true/false value assignment. To set the server up to allow embedding, it is necessary to set a value inside a dictionary. I can't work out how to pass a dictionary in through the command-line.
Another acceptable option would be a configuration overrides file.
Some people will wonder -- why all this trouble!?!
First of all, this isn't for production. I'm trying to support non-developers by writing a web-based application that integrates IPython notebooks within it using iframes. Even though everything runs on the same machine, the different port number the notebook server uses is apparently enough to block simple iframe embedding unless I relax the X-Frame-Options header.
Being able to do this via the command line lets me set the behaviour in the launch script, rather than having to bundle a special configuration file inside my app and write an installer.
I really hope I've made the question clear enough! Thanks for any and all suggestions and help!

Looking over the IPython source for the config loaders, it seems they will evaluate whatever Python code you put on the right-hand side. I've not tested it, but based on the link you provided, you can probably pass something like
--NotebookApp.webapp_settings="{'headers': {'X-Frame-Options': 'ALLOW-FROM https://example.com/'}}"
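For completeness, the configuration-file route mentioned above would look roughly like this; a minimal, untested sketch, assuming IPython 1.x and its ipython_notebook_config.py:

# ipython_notebook_config.py -- minimal sketch, untested
c = get_config()
# Allow the notebook pages to be framed by the embedding app's origin.
c.NotebookApp.webapp_settings = {
    'headers': {'X-Frame-Options': 'ALLOW-FROM https://example.com/'}
}

This corresponds to the "configuration overrides file" option the question mentions as an acceptable fallback.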

Related

Examples of user-entered code that would cause an issue

Let's say the user has installed a Python interpreter on their machine/browser, for example using something like https://github.com/iodide-project/pyodide. I understand not allowing someone to enter arbitrary code when they don't own the resources, for example doing something like:
exec('while 1: os.fork()')
However, if the user is executing the code on their own machine, is there anything wrong with allowing them to run arbitrary evals and execs, and just telling them "please use at your own risk"? The use case is that we give the user an environment to work with a spreadsheet, where they can enter formulas using Python, and we simply pass the entered string (in the spreadsheet cell) through to their Python environment.
If you are OK with the user being able to run arbitrary JavaScript code client side (which is true for all websites), it should also be OK for them to run arbitrary code with Pyodide. Both are sandboxed by the browser.
For instance, they won't be able to interact with their actual file system, nor generally make any system calls that don't pass through the WebAssembly VM. See https://webassembly.org/docs/security/ for more details.
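In the pass-through model described above, the host simply hands the cell string to the interpreter running inside the sandbox; a minimal sketch of the Python side (the function and cell names are hypothetical):

# Evaluate a spreadsheet formula against the current cell values.
# No sandboxing is attempted here; the browser/WebAssembly VM is the sandbox.
def evaluate_cell(formula, cells):
    return eval(formula, dict(cells))

evaluate_cell("A1 + B2", {"A1": 3, "B2": 5})  # -> 8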

How best to enable non-programmers to run a Python program

I have written a Python script that models an academic problem I wish to publish. I will put the source on GitHub, and some academics who happen to know Python may take the source and play with it themselves. However, there are probably more academics who would be interested in the model but are not Python programmers, and I would like them to be able to run it too. Even though they are not programmers, they could at least try editing the values of some parameters to see how that affects the results. So my question is: how can I make it as easy as possible for a non-Python programmer to run my program? I would guess that my options are...
Google Colab
an online Python compiler like this one
compiling the program into an exe (and letting the user set parameters via a config file)
something else?
So now, a couple of complications that make my problem trickier.
The output of the program is graphical and uses matplotlib. As I understand it, the utilities that turn Python scripts into exe files struggle, or fail altogether, when it comes to matplotlib.
The source is split into two separate files: one small, neat file containing the model, which readers might like to look over to get the gist of it even if they're not really Python programmers, and a separate large, ugly file that just handles the graphics. An academic would have no interest in the latter, and I'd like to spare them the gory details.
EDIT: I did ask a related question here, but that was all about programmers who won't mind doing things like installing Python and using pip... this question is about non-programmers who would not be comfortable doing things like that.
Colab can handle both complications, but you may need to adapt some code.
Matplotlib interface: Colab can display plots just fine, but if you want users to interact via sliders, checkboxes, or dropdown menus, you need to use Colab's own form UI, or ipywidgets. See an example here.
Two separate Python files: you can convert one of them to a notebook and import the other, or you can create a new notebook that imports both files. Here's an example.
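To make that concrete, the first cell of such a notebook might fetch the sources and import only the model; a rough sketch, in which the repository, module, and function names are all placeholders:

# Hypothetical Colab cell; repository and API names are placeholders.
!git clone https://github.com/yourname/yourmodel.git
%cd yourmodel

import model     # the small, neat model file worth reading
import graphics  # the large, ugly plotting file, used but never read

results = model.run(parameter=0.5)  # readers edit the parameters here
graphics.plot(results)

A reader can then tweak the parameter values in that one cell and re-run it without ever opening the graphics code.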

when using Watchman's watch-make I want to access the name of the changed files

I am writing a watchman command with watchman-make and I'm at a loss when trying to access exactly what was changed in the directory. I want to run my upload.py script, and inside the script I would like to access the filenames of newly created files in /var/spool/cups-pdf/ANONYMOUS.
so far I have
$ watchman-make -p '/var/spool/cups-pdf/ANONYMOUS' --run 'python /home/pi/upload.py'
I'd like to add another argument to python upload.py so that I have the exact filepath of the newly created file and can send it over to my database in upload.py.
I've been looking at the watchman docs, and the closest thing I can think of to use is a trigger object. Please help!
Solution with watchman-wait:
Assuming project layout like this:
/posts/_SUBDIR_WITH_POST_NAME_/index.md
/Scripts/convert.sh
And the shell script like this:
#!/bin/bash
# File: convert.sh
SrcDirPath=$(cd "$(dirname "$0")/../"; pwd)
cd "$SrcDirPath"
echo "Converting: $SrcDirPath/$1"
Then we can launch watchman-wait like this:
watchman-wait . --max-events 0 -p 'posts/**/*.md' | while read line; do ./Scripts/convert.sh "$line"; done
When we change the file /posts/_SUBDIR_WITH_POST_NAME_/index.md, the output will look like this:
...
Converting: /Users/.../Angular/dartweb_quickstart/posts/swift-on-android-building-toolchain/index.md
Converting: /Users/.../Angular/dartweb_quickstart/posts/swift-on-android-building-toolchain/index.md
...
watchman-make is intended to be used together with tools that will perform a follow-up query of their own to discover what they want to do as a next step. For example, running the make tool will cause make to stat the various deps to bring things up to date.
That means that your upload.py script needs to know how to do this for itself if you want to use it with watchman.
You have a couple of options, depending on how sophisticated you want things to be:
Use pywatchman to issue an ad-hoc query
If you want to be able to run upload.py whenever you want and have it figure out the right thing (just like make would do), then you can have it ask watchman directly. You can have upload.py use pywatchman (the Python watchman client) to do this. pywatchman gets installed if the watchman configure script thinks you have a working Python installation. You can also pip install pywatchman. Once you have it available and on your PYTHONPATH:
import os
import pywatchman

client = pywatchman.client()
# Make sure the working directory is being watched.
client.query('watch-project', os.getcwd())
# Ask for the files changed since the last query that used this cursor.
result = client.query('query', os.getcwd(), {
    "since": "n:pi_upload",
    "fields": ["name"]})
print(result["files"])
This snippet uses the since generator with a named cursor to discover the list of files that changed since the last query was issued using that same named cursor. Watchman will remember the associated clock value for you, so you don't need to complicate your script with state tracking. We're using the name pi_upload for the cursor; the name needs to be unique among the watchman clients that might use named cursors, so naming it after your tool is a good idea to avoid potential conflict.
This is probably the most direct way to extract the information you need without requiring that you make more invasive changes to your upload script.
Use pywatchman to initiate a long running subscription
This approach will transform your upload.py script so that it knows how to directly subscribe to watchman, so instead of using watchman-make you'd just directly run upload.py and it would keep running and performing the uploads. This is a bit more invasive and is a bit too much code to try and paste in here. If you're interested in this approach then I'd suggest that you take the code behind watchman-wait as a starting point. You can find it here:
https://github.com/facebook/watchman/blob/master/python/bin/watchman-wait
The key piece of this that you might want to modify is this line:
https://github.com/facebook/watchman/blob/master/python/bin/watchman-wait#L169
which is where it receives the list of files.
Why not triggers?
You could use triggers for this, but we're steering folks away from triggers because they are hard to manage: a trigger runs in the background and sends its output to the watchman log file, and it can be difficult to tell whether it is running, or to stop it.
Speaking of unix, what about watchman-wait?
We also have a command that emits the list of changed files as they change. You could potentially stream the output from watchman-wait into your upload.py. This would give it some similarities with the subscription approach, but without directly using the pywatchman client. The interface is closer to the unix model and allows you to feed the list of files to your script on stdin.
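For the original use case, that could look something like this; a minimal sketch in which upload_to_database stands in for the real upload logic:

# upload.py -- read changed filenames, one per line, from stdin
import sys

def upload_to_database(path):
    print("uploading", path)  # placeholder for the real database call

for line in sys.stdin:
    path = line.strip()
    if path:
        upload_to_database(path)

invoked along the lines of:

watchman-wait /var/spool/cups-pdf/ANONYMOUS --max-events 0 | python /home/pi/upload.py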

PyCharm: Storing variables in memory to be able to run code from a "checkpoint"

I've been searching everywhere for an answer to this but to no avail. I want to be able to run my code and have the variables stored in memory so that I can perhaps set a "checkpoint" which I can run from in the future. The reason is that I have a fairly expensive function that takes some time to compute (as well as user input) and it would be nice if I didn't have to wait for it to finish every time I run after I change something downstream.
I'm sure a feature like this exists in PyCharm but I have no idea what it's called and the documentation isn't very clear to me at my level of experience. It would save me a lot of time if someone could point me in the right direction.
Turns out this is (more or less) possible by using the PyCharm console. I guess I should have realized this earlier because it seems so simple now (though I've never used a console in my life so I guess I should learn).
Anyway, the console lets you run blocks of your code, provided the required variables, functions, libraries, etc. have been defined beforehand. You can highlight a block of code in the PyCharm editor, right-click, and select "Run in console" to execute it.
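As an illustration of that workflow (the names here are made up):

import time

def expensive_function():
    time.sleep(60)  # stands in for the slow computation and user input
    return {"answer": 42}

# Highlight up to here and "Run in console" once; `data` stays in memory.
data = expensive_function()

# Downstream code: highlight and re-run in the same console as often as
# you like without repeating the expensive step.
print(data["answer"])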
This feature is not implemented in PyCharm (see the PyCharm forum), but it seems to be implemented in Spyder.

Python fabric put statistics

When I put a file on a remote server (using put()), is there any way to see the upload information or statistics printed to the stdout file descriptor?
There's no such way according to the documentation. You could, however, try the project tools.
There's also the option to play with Fabric's local function, but that of course breaks the whole host concept.
There's also no way to make Fabric more verbose than the default (except for debugging). This makes sense, because Fabric doesn't really work with terminal escape codes to delete lines again, and displaying statistics would print far too many lines. Detecting line deletions within Fabric and applying them would actually be a nice feature (just throwing the idea out for a potential pull request).
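As a workaround, you could wrap put() and report the statistics yourself; a minimal Fabric 1.x sketch (the task and paths are just illustrative):

# fabfile.py -- time the transfer and compute throughput manually
import os
import time
from fabric.api import put, task

@task
def upload(local_path, remote_path):
    size = os.path.getsize(local_path)
    start = time.time()
    put(local_path, remote_path)
    elapsed = time.time() - start
    print("sent %d bytes in %.2fs (%.1f kB/s)"
          % (size, elapsed, size / elapsed / 1000.0))

Invoked with something like fab upload:/tmp/report.csv,/srv/data/report.csv.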
