Is it possible to see TensorBoard over SSH? - python

I am running TensorFlow code remotely on an SSH server (e.g., ssh -X account@server.address).
On the remote server, TensorBoard says "You can navigate to http://0.0.0.0:6006".
In this case, how can I check TensorBoard? How do I navigate to that address on the remote machine?
I tried to search, but there seems to be no useful information.

0.0.0.0 is the wildcard address, meaning the server is listening on all of its interfaces. Thus, you can use any of the machine's addresses to reach it unless the system's firewall is implementing something more restrictive.
That said, let's assume that it is implementing firewall-based restrictions (if it weren't, you could just access http://server.address:6006/ -- but so could anyone else). In that case:
ssh -L 16006:127.0.0.1:6006 account@server.address
...and then refer to http://127.0.0.1:16006/ in a local browser.
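If you prefer to set the tunnel up from Python rather than typing the ssh command yourself, a minimal sketch (the hostname, username, and ports are placeholders to adjust) could look like this:
import subprocess
import time
import webbrowser

# Forward local port 16006 to port 6006 on the remote machine.
tunnel = subprocess.Popen(
    ['ssh', '-N', '-L', '16006:127.0.0.1:6006', 'account@server.address'])
time.sleep(2)  # give ssh a moment to establish the tunnel
webbrowser.open('http://127.0.0.1:16006/')
# ...when finished, close the tunnel with tunnel.terminate()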

Related

Can't connect to GCP VM website with external IP

I'm trying to connect (from the browser) to my Django website, which is hosted on the GCP virtual machine.
Everything works fine if I'm accessing it internally using internal IP or localhost.
However, I can't access website with external IP.
There are no logs in Django showing anyone trying to access the site when I use the external IP.
I have HTTP and HTTPS traffic enabled on the instance.
Firewall rule to allow port 80:
Here is Test-Net results.
I've searched the web for answers, but nothing looks wrong in my settings.
Any ideas would be appreciated.
UPDATE:
Do not create or change egress rules unless you know exactly what they do. They are not necessary for ingress rules (VPC Firewalls automatically allow return traffic):
I've changed all firewall rules back how they were so now only port 80 is allowed.
You have an ingress rule for the target http-server. Is that target flag set on the VM instance?
What is the output from sudo lsof -i -P -n | grep LISTEN? Your Django server must be listening on 0.0.0.0 instead of localhost.
I have 0.0.0.0 with port 80 in the Django terminal.
I use Windows Server 2016, so I don't know the PowerShell command that would display what you asked for.
Here is the netstat output of listening ports for Django.
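As a further sanity check, a small script like the one below (the external IP is a placeholder from the TEST-NET range) can be run from another machine to see whether anything answers on the port at all; a failure here usually points at a firewall rule or a wrong bind address rather than at Django itself.
import socket

# Try a plain TCP connection to the VM's external address on port 80.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(3)
result = s.connect_ex(('203.0.113.10', 80))  # placeholder external IP
print('reachable' if result == 0 else 'not reachable')
s.close()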

How to port forward from Eclipse Che instance to local machine?

Background
After about a year of having a GoDaddy cloud service, and being super disappointed with it from the get-go, their announcement that they would be discontinuing Cloud Server services was like a sign from the heavens.
I then created a Google Cloud account. One of the biggest reasons I got a cloud server to begin with was to have an Eclipse Che instance, an IDE wherever you are! I love it, but despite the temporary partnership between Bitnami and GoDaddy, launching an Eclipse Che instance with them was such a mind-numbing task, since their internal Factory build still required a ton of Docker configuration...
And though I can appreciate the fact that I did learn the ins and outs of configuring Docker's network settings, which is not something to wince at... as soon as I got my Google Cloud account it was simply a 1-2-3 and go!
Question
While I'm running an Eclipse Che instance, what is the proper way to port-forward a given workspace to my local machine? The scenario is simple...
I created a Python stack in which I am using Django, but when I run the server (which by default binds to the project's local IP), I have yet to find the easy, and more than likely existing, standard way to run the Django server and have Eclipse Che create the URL to the project. I'm ninety-nine percent sure that I'm going about this the wrong way, given that even some of the demo stack projects with Node or Python are plug-and-play.
PS: I am able to SSH into the workspace with no issue; I'm just confused about how to port forward from remote to local, as I've only really done it the other way around. ssh -R ... or -L?
What you need is an SSH tunnel, which is -L. If you need to expose a local port on the server, that is called a reverse SSH tunnel, which is -R.
So, a simple command:
ssh -L <localport>:127.0.0.1:<remoteport> <user>@<server>
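For example, if a Django development server is listening on port 8000 inside the workspace, something like ssh -L 8000:127.0.0.1:8000 user@che-host (where user and che-host are whatever you already use to SSH into the workspace) lets you browse to http://localhost:8000/ on your own machine.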
Some extension to the other answer mentioning ssh tunneling...
If, in eclipse-che, you run a docker-dev on a server (e.g. 192.168.1.123) that is not your local machine and that provides some web service you want to access, first find out the IP address of the docker-dev, e.g. by opening a terminal in your Eclipse Che workspace and executing ip addr. There you will see some 172.17.x.x address that is accessible only from the server. Assume the service in docker-dev is listening on port 12345; then you need the following SSH port forwarding from your local machine to access it:
ssh -L 8888:172.17.0.2:12345 192.168.1.123
While the SSH connection is open, you can access the web service with your browser at http://127.0.0.1:8888/
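If you want to verify the forward programmatically rather than in the browser, a small check against the same placeholder port might be:
import urllib.request

# With the ssh -L tunnel above still open, the container's service
# answers on the local end of the forward.
with urllib.request.urlopen('http://127.0.0.1:8888/') as response:
    print(response.status)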

Connected to ssh session via my browser. How can I access my local files through the browser?

I'm connected to a VM on a private network at address 'abc.def.com' using ssh, and on that VM there's an application that hosts a Python web app (IPython Notebook) that I can access by pointing my local browser to 'abc.def.com:7777'.
From that web app I can call shell commands by preceding them with '!', for example !ls -lt will list the files in the VM current working directory. But since I'm using my own laptop's browser, I think I should be able to run shell commands on my local files as well. How would I do that?
If that's not possible, what Python/shell command can I run from within the web app to automatically get my laptop's IP address to use things like scp? I know how to get my IP address, but I'd like to create a program that will automatically enable scp for whoever uses it.
You have SSH access, so you could write a Python function that transfers files via scp, the secure copy command, which uses SSH to communicate. If you exchange keys with the server, you wouldn't have to enter a password, so I see no problem from that standpoint. The issue is whether you have an address at which your local machine can be reached from the server.
I work on various remotes from my laptop all day, and from my laptop to the server I could have this function:
import subprocess
def scp_to_server(address, local_file, remote_file):
    # copy local_file to remote_file on the remote host (requires key-based SSH auth)
    subprocess.call(['scp', local_file, "myusername@{}:{}".format(address, remote_file)])
that would copy a file from my local machine to the remote provided the paths were correct, I have permissions to copy the files, and my local machine's id_rsa.pub key is in the ~/.ssh/authorized_keys file on the remote.
However, I have no way to initiate a secure copy from the remote to my local machine, because I don't have an address for the local machine that I can "see" from the remote.
If I open the terminal on my laptop and run hostname, I see mylaptop.local, and on the remote I see remoteserver@where.i.work.edu. But the first is a local address: I can see it from other machines on my LAN at home (because I have configured that), but I can't see mylaptop.local from the remote. I know there is a way to configure things so I could find my laptop at home from anywhere, but I never had the need to do that (since I bring the laptop with me), so I can't help you there. I think there are a few more hurdles to get over than you would like.
You could implement the function above on your local machine and transfer the files that way though.
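For example, a hypothetical call (the address and paths are placeholders) would push a file up to the remote like this:
# copies a local log file into the home directory on the remote host
scp_to_server('where.i.work.edu', '/tmp/results.log', '/home/myusername/results.log')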

Can't access app deployed with docker and google cloud

I currently have a Linux Debian VM set up through Google Cloud Platform. I have docker installed and would like to start running application containers within it.
I'm following the documentation on Docker's website (found here), under
"Running a web application in Docker". I download the image and run it with no issue. I then run sudo docker ps and see that the port mapping is 0.0.0.0:32768->5000/tcp
I then try to browse to the website at http://"MyExternalVMIP":32768, but the application doesn't come up. Am I missing something?
First, test to see if your service works at all. To do this, from the VM itself, run:
wget http://localhost:32768
or
curl http://localhost:32768
If that works, that means the service is operating properly, so let's move further with the debugging.
There may be two firewalls that are blocking external access to your docker process:
the VM's OS firewall
Google Compute Engine firewall
You can see if you're affected by the first issue by accessing the URL from the VM itself and from another VM on the same GCE network (use the VM name in the URL, not the external IP):
wget http://[vm-name]:32768
To fix the first issue, you would have to either open up the single port (recommended):
iptables -I INPUT -p tcp -s 0.0.0.0/0 --dport 32768 -j ACCEPT
or disable the firewall entirely, e.g., by stopping iptables (not recommended).
If, after fixing this, you can access the URL from another host on the same GCE network, but still can't access it from outside of Google Compute Engine, you're affected by the second issue. To fix it, you will need to open the port in the GCE firewall; this can also be done via the web UI in the Developers Console.
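If you prefer the command line to the web UI, a rule along these lines (the rule name is arbitrary, and this assumes the gcloud CLI is installed and configured) would open the port:
gcloud compute firewall-rules create allow-docker-32768 --allow tcp:32768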
Create an entry in your local SSH config file, as below, with a specific local forward port. In my case it is an example for YARN's address, which I want to access in the browser.
Host hadoop
    HostName <External-IP>
    User <Local-machine-username>
    IdentityFile ~/.ssh/<private-key-for-above-user>
    LocalForward 8089 <Internal-IP>:8088
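With that entry in place, running ssh hadoop opens the connection, and the service on <Internal-IP>:8088 should then be reachable at http://localhost:8089/ in a local browser for as long as the session stays open.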

How to easily create a local SSH tunnel using pure Python

I have a pretty simple task: using ssh, I want to create a tunnel that forwards traffic from my local machine to a specific port on a remote machine. I can do this from the command line:
ssh -N -L 123:127.0.0.1:456 user@remotehost
Then if I run:
telnet localhost 123
it's the equivalent of logging into remotehost and running
telnet 127.0.0.1 456
I've managed to do this with something along the lines of:
subprocess.Popen(['ssh', '-N', '-L', '%i:127.0.0.1:%i' % (new_port, old_port), ssh_user + '@' + ip_addr])
But now I want to move away from that and use only Python - no external processes.
I've tried using fabric.context_managers.remote_tunnel but unless I've misunderstood this is meant for creating a tunnel that starts at a remote location, not from the local machine. That is, it is the equivalent of SSHing into a remote machine and creating an SSH tunnel from there, which is silly for my purpose. I suppose I could set the remote host to actually be the local machine but this seems inefficient and honestly I don't even understand how to do that.
I've also tried paramiko's forward.py demo, and it doesn't work because my private key is encrypted. I'd like to modify the script to handle that, and also just simplify it for my needs, but both the script and the paramiko library are daunting and I don't know where to begin.
Surely there's an easy way to do this? I seem to be so close yet so far.
What do you mean by "pure Python"? subprocess is bundled with the standard Python installation.
subprocess and Fabric are designed for such tasks; why would you want to move away from them?
If you have minimal tasks to be performed remotely (e.g. checking memory, the hostname, etc.), you can go ahead with subprocess. However, if you have bigger requirements, I would suggest going with Fabric.
For your purpose, where you have to work from the same local machine, why not use subprocess with check_call or Popen? As an alternative, you can restructure your code to call the underlying Unix facilities directly and achieve what the shell commands do.
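If the goal really is to stay in-process, one option (assuming the third-party sshtunnel package, which wraps paramiko, is available, and that you know the passphrase of the encrypted key) is a sketch along these lines:
from sshtunnel import SSHTunnelForwarder

# Forward local port 123 to port 456 on remotehost, as in the ssh -N -L example.
tunnel = SSHTunnelForwarder(
    ('remotehost', 22),
    ssh_username='user',
    ssh_pkey='/home/user/.ssh/id_rsa',
    ssh_private_key_password='passphrase-for-the-encrypted-key',
    remote_bind_address=('127.0.0.1', 456),
    local_bind_address=('127.0.0.1', 123),
)
tunnel.start()
# ...connect to localhost:123 here, e.g. telnet localhost 123 from another shell...
tunnel.stop()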

Categories

Resources