I am working on a Python project that creates an Amazon EC2 instance and establishes SSH and SFTP connections to transfer files and commands between my machine and the EC2 instance.
So I began coding, and I wrote the function that creates an EC2 instance using the boto3 library.
import boto3

ec2 = boto3.resource('ec2')

# create a file named sefa.pem that will store the private key
with open('sefa.pem', 'w') as outfile:
    keypair = ec2.meta.client.create_key_pair(KeyName='sefakeypair')  # create the key pair
    keyout = str(keypair['KeyMaterial'])  # read the key material
    outfile.write(keyout)  # write the key material to sefa.pem

# finally, create the instance (KeyName added so the instance accepts the key pair above)
response = ec2.create_instances(ImageId='ami-34913254', MinCount=1, MaxCount=1, InstanceType='t2.micro', KeyName='sefakeypair')
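The connect step further down needs the instance's public DNS name. A minimal sketch of getting it with the boto3 resource API (PUB_DNS is the variable the SSH code below assumes):

instance = response[0]
instance.wait_until_running()  # block until the instance reaches the 'running' state
instance.reload()              # refresh attributes so public_dns_name is populated
PUB_DNS = instance.public_dns_name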
After that, I need to establish an SSH connection between my machine and the EC2 instance to send commands, and I also need to transfer files back and forth between my machine and the EC2 instance.
After some research, I found out that there is a Python library called paramiko for establishing SSH and SFTP connections between my computer and the EC2 instance.
I tried to establish an SSH connection between my computer and the EC2 instance, but I have been facing "[Errno 110] Connection timed out" for a day. I have been searching the internet for hours, but I couldn't find anything useful.
Here is the code that raises the connection timeout error:
import time
import paramiko

con = paramiko.SSHClient()  # SSH client from the paramiko library
con.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # needed so unknown host keys are added automatically
k = paramiko.RSAKey.from_private_key_file("sefa.pem")  # read sefa.pem and load the private key
time.sleep(30)  # added this because EC2 should pass its 2/2 status checks before connecting
print("connecting")
con.connect(hostname=PUB_DNS, username="ubuntu", pkey=k, look_for_keys=True)  # HERE IS THE ERROR, I CAN'T CONNECT
print("connected")
stdin, stdout, stderr = con.exec_command('echo "TEST"')
print(stdout.readlines())
con.close()
I cannot go any further without establishing a connection between my machine and the EC2 instance.
Do you have any suggestions to solve this problem?
Is there an alternative library to paramiko?
I managed to solve the problem. The problem was my EC2 instance's configuration. These steps solved the issue:
Make sure that the instance's security group allows inbound SSH traffic (port 22) so the SSH daemon is reachable and you can connect.
Make sure that you use the key pair that you created while creating the instance.
Make sure that you execute chmod 400 keypair.pem so the private key file has strict enough permissions.
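As an illustration of the first point, a minimal sketch of opening port 22 with boto3 (the security group ID here is a hypothetical placeholder; ideally restrict the CIDR to your own IP):

ec2.meta.client.authorize_security_group_ingress(
    GroupId='sg-0123456789abcdef0',  # hypothetical: your instance's security group
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 22,
        'ToPort': 22,
        'IpRanges': [{'CidrIp': '0.0.0.0/0'}],  # open to the world; narrow this in practice
    }])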
I was facing the same error and here is how I solved it:
1. Install openssh-client on your client VM and openssh-server on your server VM.
2. Don't execute ssh user@ip_address against your own address, as this will log you in to your host VM, and then the IP on both your client and server will be the same; this is the main reason for the error.
3. Instead, use
ssh-keyscan ip_address >> ~/.ssh/known_hosts
so that the host key is in known_hosts and the original IP remains.
Context
I have a private server, reachable by using a public server as a proxy
|------| |------| |-------|
|Remote| -> |Public| -> |Private|
|------| |------| |-------|
I can connect to the private server (ssh keys are correctly set up) with
user@remote:$ ssh user@public
user@public:$ ssh user@private
user@private:$
Or in one line:
user@remote:$ ssh -o ProxyCommand='ssh -W %h:%p user@public' user@private
Problem:
Now, I wish to be able to send RPyC requests from the remote machine directly to the private server.
As an insight into why I need it: the remote machine has a camera, while the private server has GPUs (and there is a good connection between the two).
What I've tried so far
I managed to run a SSL connection as in RPyC SSH connection
conn = rpyc.ssl_connect("private", port = 12345, keyfile="/path/to/my.key", certfile="/path/to/my.cert")
with key and certificate obtained with something like Create a self signed X509 certificate in Python.
Now, it works IF the client has been launched from the public server. I don't know how to redirect the SSL connection from the remote machine.
Something else that I have tried is to declare a plumbum SshMachine as the Zero-Deploy tutorial indicates (https://rpyc.readthedocs.io/en/latest/docs/zerodeploy.html):
mach = SshMachine("user@private", ssh_opts=["-o ProxyCommand='ssh -W %h:%p user@public'"])
I can launch a Zero-Deploy server using this, but it is not satisfactory because it uses a fresh (temporary) copy of Python, and I need the libraries installed on the private server (e.g. the CUDA setup).
Of course, I cannot combine the two approaches, since ssl_connect requires a string as the hostname and raises an exception if given an SshMachine.
Constraints
I don't have root access to either the private or the public server, but any library that can be installed with pip is fine. I have tried looking e.g. at paramiko, but I am not sure where to start...
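For what it's worth, a minimal paramiko sketch of the same hop (an untested assumption on my part; user, public and private are the placeholders from above):

import paramiko

# Reuse the same ProxyCommand as the ssh one-liner to reach the private server
proxy = paramiko.ProxyCommand("ssh -W private:22 user@public")
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("private", username="user", sock=proxy)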
Update
I found a solution (see the answer https://stackoverflow.com/a/68535406/6068769), but I still have a few questions, so I am not accepting it yet:
I had to remove the authenticator argument from ThreadedServer. What is the syntax (client + server) to add one on top of the SSH connection pipeline?
For the solution to work, I need to already have an SSH connection open between the remote and private servers in another terminal (ssh -o ...). Otherwise, the SshMachine refuses to connect with the following errors:
plumbum.machine.session.SSHCommsError: SSH communication failed
Return code: | 255
Command line: | 'true '
stderr: | /bin/bash: line 0: exec: ssh -W private:22 user@public: not found
I can live with opening the connection beforehand, but it would be cleaner if I didn't have to.
Is there another solution using the SSL protocol?
OK, I was not far off; I just missed the method rpyc.ssh_connect.
Here is the MWE:
## Server
import rpyc

class MyService(rpyc.Service):
    def on_connect(self, conn):
        pass

    def on_disconnect(self, conn):
        pass

    def exposed_some_computations(self, input):
        return 2 * input

if __name__ == "__main__":
    from rpyc.utils.server import ThreadedServer
    server = ThreadedServer(MyService, port=12345)
    server.start()

## Client
from plumbum import SshMachine
import rpyc

mach = SshMachine("user@private", ssh_opts=["-o ProxyCommand='ssh -W %h:%p user@public'"])
conn = rpyc.ssh_connect(mach, 12345)
result = conn.root.exposed_some_computations(18)
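Note that rpyc resolves attribute access against the exposed_ prefix, so the last line can equivalently be written without it:

result = conn.root.some_computations(18)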
I have a python application where I'm trying to access a MySQL database on Google's cloud service.
I've been following this setup guide for connecting via an external application (Python), and I am using the pymysql package. I'm attempting to connect via the proxy and have already authenticated my connection via gcloud auth login from the console.
As of now, I CAN access the database via the console, but I need to be able to make queries from my python script to build it out. When I try running it as is, I get the following error:
OperationalError: (2003, "Can't connect to MySQL server on '34.86.47.192' (timed out)")
Here's the function I'm using, with security-sensitive info starred out:
import os
import subprocess
import pymysql

def uploadData():
    # cd to the directory with the MySQL exe
    os.chdir('C:\\Program Files\\MySQL\\MySQL Server 8.0\\bin')
    # Invoke the proxy
    subprocess.call('start cloud_sql_proxy_x64.exe -instances=trans-cosine-289719:us-east4:compuweather', shell=True)
    # Create connection
    # I have also tried host='127.0.0.1' for localhost here
    conn = pymysql.connect(host='34.86.47.192',
                           user='root',
                           password='*******',
                           db='gribdata')
    try:
        c = conn.cursor()
        # Use the right database
        db_query = 'use gribdata'
        c.execute(db_query)
        query = 'SELECT * FROM clients'
        c.execute(query)
        result = c.fetchall()
        print(result)
    except pymysql.Error as e:
        print(e)
    finally:
        conn.close()
Yeah, this one's pretty limited in documentation, but what you want to do is run it from its hosted IP and configure access to your external IP address on your server. So you want to use that IP (34.xxx.xxx.xxx) rather than the loopback localhost IP 127.0.0.1.
To get it to work, go to your connections tab and add a new connection within Gcloud. Make sure the public address box is checked, the IP is correct, and you save once done.
There are some excellent details here from some Gcloud engineers. It looks like some of the source documentation is outdated, and this is the way to connect now.
First of all, confirm that the Cloud SQL proxy is indeed installed in the directory where you expect it to be. The Cloud SQL proxy is not part of MySQL Server, so you should not find it in C:\Program Files\MySQL\MySQL Server 8.0\bin, at least not by default. Instead, the Cloud SQL proxy is a tool provided by Google: just an .exe file that can be stored in any directory you wish. For instructions on how to download the proxy, you can check the docs.
The Cloud SQL proxy creates a secure link between the Cloud SQL instance and your machine. What it does is forward a local port on your machine to the Cloud SQL instance. Thus, the host IP that you should use when using the proxy is 127.0.0.1:
conn = pymysql.connect(host='127.0.0.1',
                       user='root',
                       password='*******',
                       db='gribdata')
When starting the Cloud SQL proxy with a TCP socket, you should add the port to which you want to forward Cloud SQL's traffic at the end of the start command: =tcp:3306
subprocess.call('start cloud_sql_proxy_x64.exe -instances=trans-cosine-289719:us-east4:compuweather=tcp:3306', shell=True)
Have you tried to connect to Cloud SQL from the console? Once connected, you should get a message in the console displaying "Listening on 127.0.0.1:3306". Your connection command should be
"cloud_sql_proxy_x64.exe -instances=trans-cosine-289719:us-east4:compuweather=tcp:3306"
Try to connect the Cloud SQL proxy from the console and then try to create a connection with pymysql. Use "127.0.0.1".
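Putting the pieces together, a minimal sketch of the corrected flow (assuming the proxy exe is reachable from the working directory and forwards to port 3306; the socket poll just waits until the proxy is actually listening before connecting):

import socket
import subprocess
import time

import pymysql

subprocess.call('start cloud_sql_proxy_x64.exe -instances=trans-cosine-289719:us-east4:compuweather=tcp:3306', shell=True)

# Poll until the proxy accepts connections on the forwarded port
for _ in range(30):
    try:
        socket.create_connection(('127.0.0.1', 3306), timeout=1).close()
        break
    except OSError:
        time.sleep(1)

conn = pymysql.connect(host='127.0.0.1', user='root', password='*******', db='gribdata')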
I'm trying to create a Python connection to a remote server through an SSH jump host (one I've successfully created in Oracle SQL Developer) but can't replicate it in Python. I can connect to the SSH host successfully but fail to forward the port to the remote server due to timeouts or errors opening tunnels. It's safe to assume my code is incorrect rather than that there are server issues. I also need a solution that doesn't use the "with SSHTunnelForwarder() as server:" approach, because I need a continuous session similar to an OSD/cx_Oracle session rather than a batch-processing function.
Similar examples provided here (and elsewhere) using paramiko, sshtunnel, and cx_Oracle haven't worked for me. Many other examples don't require (or at least don't clearly specify) separate login credentials for the remote server. I expect the critical unclear piece is which local host + port to use, which my SQL Developer connection doesn't require explicitly (although I've tried using the ports OSD chooses, just not at the same time).
The closest match, I think, was the top answer from paramiko-port-forwarding-around-a-nat-router.
OSD Inputs
SSH Host
- host = proxy_hostname
- port = proxy_port = 22
- username = proxy_username
- password = proxy_password
Local Port Forward
- host = remote_hostname
- port = remote_port = 1521
- automatically assign local port = True
Connection
- username = remote_username
- password = remote_password
- connection type = SSH
- SID = remote_server_sid
Python Code
i.e., analogous code from paramiko-port-forwarding-around-a-nat-router:
import paramiko
from paramiko import SSHClient
# Instantiate a client and connect to the proxy server
proxy_client = SSHClient()
proxy_client.connect(
    proxy_hostname,
    port=proxy_port,
    username=proxy_username,
    password=proxy_password)

# Get the client's transport and open a `direct-tcpip` channel, passing
# the destination hostname:port and the local hostname:port
transport = proxy_client.get_transport()
dest_addr = (remote_hostname, remote_port)
local_addr = ('localhost', 55587)
channel = transport.open_channel("direct-tcpip", dest_addr, local_addr)

# Create a NEW client and pass this channel to it as the `sock` (along
# with whatever credentials you need to auth into your REMOTE box)
remote_client = SSHClient()
remote_client.connect(
    'localhost',
    port=55587,
    username=remote_username,
    password=remote_password,
    sock=channel)
Rather than a connection to the remote server, I get
transport.py in start_client()
SSHException: Error reading SSH protocol banner
Solution
Finally figured out a solution! It is analogous to OSD's automatic local port assignment and doesn't require SSHTunnelForwarder's with statement. Hope it can help someone else: use the question's OSD input variables with...
from sshtunnel import SSHTunnelForwarder
import cx_Oracle
server = SSHTunnelForwarder(
    (proxy_hostname, proxy_port),
    ssh_username=proxy_username,
    ssh_password=proxy_password,
    remote_bind_address=(remote_hostname, remote_port))
server.start()
db = cx_Oracle.connect('%s/%s@%s:%s/%s' % (remote_username, remote_password, 'localhost', server.local_bind_port, remote_server_sid))
# do something with db
server.stop()
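For example, "do something with db" could be a simple sanity-check query (a hypothetical one, just to show that the session stays usable between start() and stop()):

cur = db.cursor()
cur.execute("SELECT sysdate FROM dual")
print(cur.fetchone())
cur.close()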
I have Neo4J running in a Docker container in which I have mapped the internal container ports 7473 and 7687 to their respective host ports 7473 and 7687; 7474 is exposed but not mapped.
Here is the Neo4J server configuration regarding networking:
# Bolt connector
dbms.connector.bolt.enabled=true
#dbms.connector.bolt.tls_level=OPTIONAL
dbms.connector.bolt.listen_address=0.0.0.0:7687
# HTTP Connector. There must be exactly one HTTP connector.
dbms.connector.http.enabled=true
dbms.connector.http.listen_address=0.0.0.0:7474
# HTTPS Connector. There can be zero or one HTTPS connectors.
dbms.connector.https.enabled=true
dbms.connector.https.listen_address=0.0.0.0:7473
I was able to log in to Neo4J's web client through the browser and change the default password.
Regarding the Python code, here's the line where I create the client:
self.client = py2neo.Graph(host=ip_address,
                           username=username,
                           password=password,
                           secure=use_secure,
                           bolt=use_bolt)
As soon as I execute a query like this one:
node = Node("FooBar", foo="bar")
self.client.create(node)
I get the following Unauthorized exception:
py2neo.database.status.Unauthorized: https://localhost:7473/db/data/
Any idea on why this may be happening?
The solution was to call a separate authentication method provided by the library like this:
auth_port = str(self._PORT_HTTPS if use_secure else self._PORT_HTTP)
py2neo.authenticate(":".join([ip_address, auth_port]), username, password)
It took me a while to get to this because, at first, I thought the authentication was done automatically in the constructor, and then I wasn't able to make the authentication method work because I was using the bolt port instead of the HTTP(S) one.
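Putting it together, the order of calls looks something like this (a sketch assuming the port numbers from the configuration above):

import py2neo

auth_port = str(7473 if use_secure else 7474)  # authenticate against the HTTP(S) port, not bolt
py2neo.authenticate(":".join([ip_address, auth_port]), username, password)
client = py2neo.Graph(host=ip_address, username=username, password=password,
                      secure=use_secure, bolt=use_bolt)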
I have set up a Redis server on an AWS EC2 instance following https://medium.com/@andrewcbass/install-redis-v3-2-on-aws-ec2-instance-93259d40a3ce
I am running a Python script on another EC2 instance:
import redis
try:
    conn = redis.Redis(host=<private ip address>, port=6379, db=1)
    user = {"Name": "Pradeep", "Company": "SCTL", "Address": "Mumbai", "Location": "RCP"}
    conn.hmset("pythonDict", user)
    conn.hgetall("pythonDict")
except Exception as e:
    print(e)
In the security groups of the Redis server, I have allowed inbound traffic on port 6379.
While running the above script, I am getting the following error:
Error 111 connecting to 172.31.22.71:6379. Connection refused.
I have already tried changing the bind value in the conf file, as suggested by a few answers to similar questions on Stack Overflow, but it didn't work.
Assuming your other instance is within the same subnet as the Redis instance, my suggestion would be to review a couple of things:
Make sure that, among your security group inbound rules, you have your Redis port set up for the subnet, like:
6379 (REDIS) 172.31.16.0/20
From within your Redis configuration (e.g. /etc/redis/redis.conf), in case this hasn't been done, either bind the server to the private IP (bind 172.31.22.71) or simply comment out any existing localhost binding, then restart Redis.
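After restarting Redis, a quick check from the client instance (reusing the private IP from the question) would be something like:

import redis

r = redis.Redis(host="172.31.22.71", port=6379, db=1)
print(r.ping())  # should print True once the bind address and security group are correct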