Running Jupyter kernel and notebook server on different machines - python

I'm trying to run an IPython/Jupyter kernel and the notebook server on two different Windows machines on a LAN.
Most of the links I found online explain how to access a remote kernel + server setup from a web browser, but say nothing about how to separate the kernel and the notebook server themselves.
Ideally, I'd like the code to remain on one machine, and the execution to happen on the other.
Is there a way that I could do this?

I ended up using this demo, which pretty much did the job for me.

This can be done, though it is a bit fiddly, and I don't believe anyone has done it on Windows before. Jupyter applications use a class called KernelManager to start and stop kernels. KernelManager provides an API responsible for launching kernel processes and collecting the network information necessary to connect to them. There are two implementations of remote kernels that I know of:
remotekernel
rk
Both of these use SSH to launch the remote kernels and assume Unix systems. I don't know how to launch processes remotely on Windows, but presumably you could follow the example of these two projects to do the same thing in a way that works on Windows.
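To make the "network information" part concrete, here is a minimal sketch of the connection file a KernelManager produces when it launches a kernel. The field values and filename are hypothetical; on a real setup the ports and HMAC key come from the launched kernel, and the `ip` field is what you would point at the remote machine:

```python
import json

# Hypothetical contents of a Jupyter kernel connection file: this is the
# network info a KernelManager collects when it launches a kernel.
connection_info = {
    "transport": "tcp",
    "ip": "127.0.0.1",          # for a remote kernel, the remote host's IP
    "shell_port": 54321,
    "iopub_port": 54322,
    "stdin_port": 54323,
    "control_port": 54324,
    "hb_port": 54325,
    "key": "replace-with-real-hmac-key",
    "signature_scheme": "hmac-sha256",
}

# A frontend on another machine can connect to an existing kernel by
# being handed a file like this (e.g. jupyter console --existing <file>).
with open("kernel-remote.json", "w") as f:
    json.dump(connection_info, f, indent=2)
```

The remote-kernel projects above essentially automate producing such a file for a kernel launched over SSH, plus tunnelling the five ports.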

Related

Cannot connect to remote jupyter server from VS Code

I need some advice. I am a big fan of VS Code and I always use its embedded notebooks. I built a remote Jupyter server on Oracle Cloud hoping I could connect from VS Code. To create the server I followed this article, but migrated, as advised by Jupyter, to Jupyter Server. I also used miniconda instead of venv.
The server seems to work correctly: I can access it from my browser and from SSH in Windows Terminal, open JupyterLab, create and run notebooks in it, etc. The problem is when I try to use it with VS Code. When I specify the Jupyter server for connections, it lets me do so; it even warns me that it is an insecure connection (I use a self-signed SSL certificate), and it does show "Jupyter Server: Remote". BUT when I try to select my interpreter or change my kernel, it only shows my local conda envs, and if I run !hostname it shows my local hostname, not my remote's. It isn't really connecting to or using the remote Jupyter server to run the cells.
I've looked around and can't find a way to make it work. I really want this to work with VS Code; any help?
This has no impact on the actual use of Jupyter. Your confusion is a misunderstanding caused by how things are named in the UI.
As stated in the official documentation, when you connect to a remote server, everything runs on the server rather than on the local computer.
There is currently an issue on GitHub about changing the naming, which you can read for the details.
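A quick way to verify where your cells actually execute is to print identity information from inside a notebook cell. If the kernel is truly remote, the hostname and interpreter path will be the server's, not your laptop's:

```python
import socket
import sys

# Run this in a notebook cell. On a genuinely remote kernel you should
# see the server's hostname and a path under the server's conda/miniconda
# install, not your local machine's.
print(socket.gethostname())
print(sys.executable)
```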

Running Python scripts on an Ubuntu machine disables the network port

One of the Ubuntu machines I manage has an issue where it completely disables the network port every time I run a Python script on it. It does not matter what the script is; after about 5 minutes of execution, the network shows as unreachable. I have tried disabling and re-enabling the network via the terminal, but this does not bring the port back online. Even a normal reboot does nothing; I have to physically unplug the machine to get it to come back up. Has anyone had this problem before?
Edit: Linux version 4.15.0-99-generic (gcc version 7.5.0). The network is a domain with this computer hooking up via a dynamic IP linked to a static IP router. This is only one of about 50 Linux machines we (college IT staff) manage and this is the only one that has ever done anything like this. Other computers in the same room with the exact same network setup run scripts perfectly.
See https://docs.python.org/3.8/library/socket.html#example and read the part about why you need to use SO_REUSEADDR.
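For reference, this is the pattern that section of the docs describes, as a minimal sketch. Without SO_REUSEADDR, rebinding to a recently used port can fail with "Address already in use" while the old socket lingers in TIME_WAIT (though note this explains bind failures, not a NIC going down):

```python
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Allow the port to be rebound immediately after the previous
# listener exits, instead of waiting out TIME_WAIT.
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
print(port)
srv.close()
```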

Run Spyder /Python on remote server

So there are variants of this question, but none quite hits the nail on the head.
I want to run Spyder and do interactive analysis on a server. I have two servers; neither has Spyder. Both have Python (Linux servers), but I don't have sudo rights to install the packages I need.
In short, the use case is: open Spyder on the local machine, do something (need help here) to use the server's computation power, and then return results to the local machine.
Update:
I have updated Python with my packages on one server. Now to figure out the kernel name and link it to Spyder.
Leaving previous version of question up, as that is still useful.
The Docker route is a little intimidating, as is paramiko. What are my options?
(Spyder maintainer here) What you need to do is create a Spyder kernel on your remote server and connect to it through SSH. That's the only facility we provide to do what you want.
You can find the precise instructions in our docs.
I did a long search for something like this in my past job, when we wanted to quickly iterate on code which had to run across many workers in a cluster. All the commercial and open source task-queue projects that I found were based on running fixed code with arbitrary inputs, rather than running arbitrary code.
I'd also be interested to see if there's something out there that I missed. But in my case, I ended up building my own solution (unfortunately not open source).
My solution was:
1) I made a Redis queue where each task consisted of a zip file with a bash setup script (for pip installs, etc), a "payload" Python script to run, and a pickle file with input data.
2) The "payload" Python script would read in the pickle file or other files contained in the zip file. It would output a file named output.zip.
3) The task worker was a Python script (running on the remote machine, listening to the Redis queue) that would unzip the file, run the bash setup script, then run the Python script. When the script exited, the worker would upload output.zip.
There were various optimizations, like the worker not running the same bash setup script twice in a row (it remembered the SHA1 hash of the most recent setup script). So, anyway, in the worst case you could do that. It was a week or two of work to set up.
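The worker loop described above can be sketched roughly as follows. This is not the original (closed-source) code: a plain in-memory list stands in for the Redis queue so the example runs without a server, the bash setup step is commented out for portability, and all names are illustrative. In the real setup each task blob would be popped from Redis (e.g. with BLPOP):

```python
import hashlib
import io
import subprocess
import sys
import tempfile
import zipfile
from pathlib import Path

def make_task_zip(setup_sh: str, payload_py: str, data: bytes) -> bytes:
    """Bundle a task: setup script, payload script, and pickled input."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as z:
        z.writestr("setup.sh", setup_sh)
        z.writestr("payload.py", payload_py)
        z.writestr("input.pkl", data)
    return buf.getvalue()

def run_task(blob: bytes, _last_setup_hash=[None]) -> str:
    """Unzip a task, run setup (skipped if unchanged), run the payload."""
    with tempfile.TemporaryDirectory() as tmp:
        with zipfile.ZipFile(io.BytesIO(blob)) as z:
            z.extractall(tmp)
        # Skip the setup script if it is byte-identical to the last one run.
        digest = hashlib.sha1(Path(tmp, "setup.sh").read_bytes()).hexdigest()
        if digest != _last_setup_hash[0]:
            _last_setup_hash[0] = digest
            # subprocess.run(["bash", str(Path(tmp, "setup.sh"))], check=True)
        out = subprocess.run(
            [sys.executable, str(Path(tmp, "payload.py"))],
            capture_output=True, text=True, cwd=tmp, check=True,
        )
        return out.stdout

# Stand-in for the Redis queue: one task whose payload just prints.
queue = [make_task_zip("echo setup", "print('hello from payload')", b"")]
result = run_task(queue.pop())
print(result)
```

The real worker would additionally zip up the payload's `output.zip` and push it back for the submitter to collect.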
Edit:
A second (much more manual) option, if you just need to run on one remote machine, is to use sshfs to mount the remote filesystem locally, so you can quickly edit the files in Spyder. Then keep an SSH window open to the remote machine and run Python from the command line to test-run the scripts on that machine. (That's my standard setup for developing Raspberry Pi programs.)

How to run a Python script successfully on a Debian system in VirtualBox?

I have a Windows 7 system, on which I installed VirtualBox 5.1.26.
In this VirtualBox I installed a Debian 64-bit Linux server. (I think I configured it correctly; it is getting enough memory.)
When I run a Python script on it (a web-scraping script that processes around 1000 pages and stores them in a database), I always get the same error message after a few minutes:
Unable to allocate and lock memory. The virtual machine will be paused. Please close applications to free up memory or close the VM.
Or an error message about running out of time (when it tries to load a website).
On the Windows 7 system my script works without any problem, so I am a little confused now: what is the problem here?
First check the parameters of your virtual machine: you might have given it more RAM or processors than you actually have (or not enough).
If this is not the case, close everything in the VM and run only the script.
These errors generally mean that you don't have the resources to perform the operation.
Check that your syntax is OK and that you are using the same version of Python on both systems.
Note that the VM is a guest system and can't have as many resources as your main OS, because otherwise the main OS would die in some circumstances.
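To check the "same version of Python" point, a quick sanity snippet you can run on both the Windows host and the Debian VM (differing versions or a 32-bit build can change memory behaviour):

```python
import platform
import sys

# Compare this output between the two systems.
print(sys.version)
print(platform.system(), platform.machine(), platform.architecture()[0])
```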

Virtual network using VMware

I have VMware Workstation Pro 12 and I can open multiple virtual machines at a time. All I want is to connect them in a virtual network. This would allow me to create a server (using Python sockets) in one virtual machine while the other VMs act as clients. Is my idea possible? If so, how can I do it?
I'm not sure if this helps, but your question doesn't give much to go on either.
The last time I used VMware was for virtual machines; I think it was called VMware Workstation 12. I used the free version, which you can use for non-commercial purposes. If that's what you are using, then this most likely applies.
Because it's not the Pro or commercial version, you can only open one virtual machine at a time. Your question mentions you're using Python, and I'm not sure what that changes, but my point is: if it's the free version, you may only be able to open one virtual machine at a time.
That may be the problem you're having.
I hope this helps you, or if not, someone else.
EDIT
Here are a few YouTube videos I found that will help you make a virtual network. You need to create a host-only network (you may wish to turn on DHCP). Once you have created the virtual network, all the VMs need to use the same one. With the VMs on the same network and able to communicate with each other, your Python script should hopefully work. I'm not familiar enough with Python to provide code to open a simple socket and test it from the client side, but I'm sure if you write your script correctly it should work now. You may need to use ipconfig (Windows cmd) / ifconfig (Unix terminal) to find the IP address of the server machine.
https://www.youtube.com/watch?v=8VPkRC0mKF4
https://youtu.be/vKoFSmy3agM?t=131
Here is a link to a simple Python server:
https://www.tutorialspoint.com/python/python_networking.htm
The host variable in the client code should be the IP of the server, not gethostname(), so use ifconfig/ipconfig on the server to find the server's IP.
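A minimal client/server sketch of that pattern. It runs against 127.0.0.1 here so it is self-contained; on the host-only VMware network you would replace that with the server VM's IP (found via ipconfig/ifconfig) on the client side, and have the server bind to "" or its own IP:

```python
import socket
import threading

def serve_once(sock):
    # Accept a single connection and send a greeting.
    conn, _addr = sock.accept()
    with conn:
        conn.sendall(b"hello from server")

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))      # port 0: OS picks a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,)).start()

# Client side: in the VM setup, use the server VM's IP here instead.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
reply = client.recv(1024)
client.close()
server.close()
print(reply.decode())
```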
