It's the first time I'm trying to use Google credits, so I apologize if it's a basic question. I am trying to see how to connect my Google credits to Google Colab following this guide: https://medium.com/@senthilnathangautham/colab-gcp-compute-how-to-link-them-together-98747e8d940e (you can open it in a new incognito window).
I am stuck at step 3 because I can't see any SSH option in my Google Cloud console. Also, are the numbers after -L fixed? If not, how can I find them?
gcloud compute ssh colab-backend --zone=europe-west4-a -L 8081:localhost:8081
EDIT: I am trying to run the above line in the Google Cloud SDK Shell, but I get this error.
Also, I can't type jupyter notebook in the terminal. If I run the above code in a Python 3 Jupyter notebook, I get this strange error.
Actually, I had made a very silly, basic mistake. First of all, your server should be Linux; then run the following in the Windows command prompt:
gcloud compute ssh colab-backend --zone=europe-west4-a -- -L 8888:localhost:8888 (change the zone to your project's, and note the local and remote ports must match the port Jupyter listens on)
Then, on the server, run:
jupyter notebook --NotebookApp.allow_origin='https://colab.research.google.com' --port=8888 --NotebookApp.port_retries=0 --no-browser
Copy and paste the printed link into 'Connect to a local runtime' in Google Colab.
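Putting the steps above together, a minimal sketch (the commands are only assembled and echoed here, not executed; substitute your own instance name, zone, and port):

```shell
# Values taken from the post above; adjust them for your own project.
INSTANCE="colab-backend"
ZONE="europe-west4-a"
PORT=8888   # must match the --port passed to Jupyter below

# 1) On your local machine, open the SSH tunnel to the VM:
TUNNEL_CMD="gcloud compute ssh $INSTANCE --zone=$ZONE -- -L $PORT:localhost:$PORT"
echo "$TUNNEL_CMD"

# 2) Inside that SSH session on the VM, start Jupyter so Colab can connect:
JUPYTER_CMD="jupyter notebook --NotebookApp.allow_origin='https://colab.research.google.com' --port=$PORT --NotebookApp.port_retries=0 --no-browser"
echo "$JUPYTER_CMD"
```

The tunnel's local and remote ports are deliberately the same as Jupyter's, which is what makes the printed token URL work from your local browser.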
Related
I am trying to host a Jupyter notebook server on AWS. I have followed the steps below, yet I cannot seem to access the notebook from my local browser. Can someone please tell me what is wrong?
https://docs.aws.amazon.com/dlami/latest/devguide/setup-jupyter.html
https://towardsdatascience.com/installing-pytorch-on-apple-m1-chip-with-gpu-acceleration-3351dc44d67c
https://docs.anaconda.com/anaconda/user-guide/tasks/remote-jupyter-notebook/
I have followed the tutorial exactly, but I just cannot seem to get past the security step for some reason.
I wonder what this means? I cannot seem to fix it.
I'm on a Windows 10 machine. I have GPUs running on the Google Cloud Platform to train deep learning models.
Historically, I have been running Jupyter notebooks on the cloud server without problems, but recently I began preferring to run Python notebooks in VS Code instead of the server-based Jupyter notebooks. I'd like to train my VS Code notebooks on my GPUs, but I don't have access to my Google instances from VS Code; I can only run locally on my CPU.
Normally, to run a typical model, I spin up my instance in the cloud.google.com Compute Engine interface. I use the Ubuntu installation on the Windows Subsystem for Linux and connect like this:
gcloud compute ssh --zone=$ZONE jupyter@$INSTANCE_NAME -- -L 8080:localhost:8080
I have tried installing the Cloud Code extension in VS Code, but as I go through the tutorials, I always get stuck. One error I keep experiencing is that gcloud won't work anywhere EXCEPT my Ubuntu terminal. I'd like it to work in the terminal inside VS Code.
Alternatively, I'd like to run the code . command from my Ubuntu command line so I can open VS Code from there, but that won't work either. I've googled a few solutions, but they lead me back to these same problems: neither gcloud nor code . works.
Edit: I just tried the Google Cloud SDK installer from https://cloud.google.com/sdk/docs/quickstart-windows
and then tried running gcloud compute ssh from PowerShell within VS Code. This is the new error I got:
(base) PS C:\Users\user\Documents\dev\project\python> gcloud compute ssh --zone=$ZONE jupyter@$INSTANCE_NAME -- -L 8080:localhost:8080
WARNING: The PuTTY PPK SSH key file for gcloud does not exist.
WARNING: The public SSH key file for gcloud does not exist.
WARNING: The private SSH key file for gcloud does not exist.
WARNING: You do not have an SSH key for gcloud.
WARNING: SSH keygen will be executed to generate a key.
ERROR: (gcloud.compute.ssh) could not parse resource []
It still runs from Ubuntu using WSL; I logged in fine. I guess I just don't know enough about how the two environments are separated, what's shared, what's missing, and how to get all my command lines using the same configuration.
It seems as if your SSH key paths are configured correctly for your Ubuntu terminal but not for the VS Code one. If your account is not configured to use OS Login, with which Compute Engine stores the generated key with your user account, local SSH keys are needed. SSH keys are specific to each instance you want to access, and here is where you can find them. Once you have found them, you can specify their path using the --ssh-key-file flag.
Another option is to use OS Login as I have mentioned before.
Here is another thread with a similar problem to yours.
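For illustration, a hedged sketch of the --ssh-key-file flag mentioned above (the instance name and zone are placeholders, and the key path shown is only gcloud's usual default location; the command is assembled and echoed rather than executed):

```shell
# Placeholder values; substitute your own instance, zone, and key path.
KEY_FILE="$HOME/.ssh/google_compute_engine"   # gcloud's default key location
CMD="gcloud compute ssh my-instance --zone=europe-west4-a --ssh-key-file=$KEY_FILE"
echo "$CMD"
```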
I am leading a team of analysts and want to introduce them to Jupyter Notebook as a window into Python programming.
We have Anaconda downloaded and installed on our Linux server. I've asked our IT to help set it up to run in Google Chrome, and they have only been able to provide the following steps:
source /R_Data/anaconda3/etc/profile.d/conda.sh
This kicks off Anaconda on the server and must be run in PuTTY. We stored the installation in the same location as RStudio, hence the R_Data in the file path.
/R_Data/anaconda3/bin/jupyter-notebook --ip 0.0.0.0 --port 8889
This starts Jupyter on port 8889 with a token generated from scratch each time. We then need to grab the token ID and paste the full URL into Chrome, per step 3:
http://localhost:8889/?token=ea97e502a7f45d....
When I paste this in Chrome it loads Jupyter.
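For reference, the three steps above collected into one sketch (the paths and port are the ones from this post; the actual server commands are shown as comments, since they only make sense on that machine):

```shell
# Paths and port from the steps above; adjust them for your own server.
CONDA_SH="/R_Data/anaconda3/etc/profile.d/conda.sh"
JUPYTER_BIN="/R_Data/anaconda3/bin/jupyter-notebook"
PORT=8889

# 1) source "$CONDA_SH"                            # activate conda in this shell
# 2) "$JUPYTER_BIN" --ip 0.0.0.0 --port "$PORT"    # start Jupyter; note the token it prints
# 3) open the token URL in Chrome:
echo "http://localhost:$PORT/?token=<token-from-step-2>"
```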
While this gets the job done, it seems less than ideal for an entire team of analysts to have to do this each time. We also have RStudio installed on the same server, but that simply opens from Chrome using a URL, since I assume it is always running in the background. Jupyter and Anaconda seem to run only after being kicked off in PuTTY, and I would like a way to bypass those steps.
I am familiar with the Jupyter config file however my limited understanding as a non-developer tells me it applies only to each user and cannot be applied to all users simultaneously (i.e. as a root user on the server or something to that effect).
I am hoping someone here might point me in the right direction. I should also point out that, as a Red Hat user, I can't follow Ubuntu-based instructions since that syntax seems different.
Many thanks for the help.
Yoni
A convenient way is to run jupyter notebook --no-browser --port=12345 on your server while connecting through an SSH tunnel: ssh -N -f -L 12345:localhost:12345 myserveralias. Now Jupyter is available at localhost:12345 on your machine. Tools like AutoSSH or keep-alive settings will help with an erratic network; however, take security into account.
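As a sketch of that setup (myserveralias stands for an alias in your ~/.ssh/config; autossh is the keep-alive tool mentioned above, and the commands are assembled and echoed here rather than run):

```shell
PORT=12345
# On the server: jupyter notebook --no-browser --port=$PORT
# On your machine, a plain tunnel (-N: no remote command, -f: background):
SSH_CMD="ssh -N -f -L $PORT:localhost:$PORT myserveralias"
# The same tunnel via autossh, which restarts it when it drops:
AUTOSSH_CMD="autossh -M 0 -N -f -L $PORT:localhost:$PORT myserveralias"
echo "$SSH_CMD"
echo "$AUTOSSH_CMD"
```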
Is it possible to run dash apps within Google Cloud Datalab?
It says it's running on a specific port, but every time I try to open it, the connection is refused. It works fine when I run it in a Jupyter notebook.
What could be the source of the problem?
I have been trying to set up a scheduled Python job on Google Compute Engine. I am a beginner with the cloud platform and am having some trouble running Python the way I do on my local machine.
So I start the VM instance and the SSH Cloud Shell. I then navigate to the folder containing my main function and run nohup python main.py & in the shell.
This module is supposed to scrape the web and then store the results in a Google spreadsheet using the gspread package. I log off after running the above command.
After running for about 20 minutes or so, it says I have no active Cloud Shell associated with my login, and it stops updating the results.
Can anyone help me with this problem?
Python 3.4, scheduler package: 'schedule'
Approaches tried: tmux, screen (detach), nohup, editing ssh_config to add ServerAliveInterval, and so on. None of them persists the background process after the SSH shell is closed.
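For reference, the nohup pattern from the attempts above, made self-contained with a trivial command standing in for main.py (as observed in the question, nohup alone does not survive the Cloud Shell session itself being reclaimed):

```shell
# nohup detaches the job from the terminal's hangup signal, and the
# redirection captures its output; a trivial echo stands in for the
# real scraper here.
nohup bash -c 'echo "scrape finished"' > scrape.log 2>/dev/null &
wait $!            # wait for the background job to complete
cat scrape.log     # the job's output survives in the log file
```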