Cannot connect remotely to postgresql database hosted on amazon ec2 instance - python

I have a postgresql database hosted on my ec2 instance. I am trying to connect to it using Python from my local computer.
import psycopg2

try:
    connection = psycopg2.connect(user="postgres",
                                  password="<password>",
                                  host="ec2-***-***-***-***.***-***-1.compute.amazonaws.com",
                                  port="5432",
                                  database="<db_name>")
    cursor = connection.cursor()
    cursor.execute("SELECT * FROM <tablename>;")
    record = cursor.fetchall()
    print(record, "\n")
except (Exception, psycopg2.Error) as error:
    print("Error while connecting to PostgreSQL", error)
finally:
    try:
        if connection:
            cursor.close()
            connection.close()
            print("PostgreSQL connection is closed")
    except:
        print("no work")
But I get
Error while connecting to PostgreSQL could not connect to server: Connection timed out (0x0000274C/10060)
Is the server running on host "ec2-***-***-***-***.***-***-1.compute.amazonaws.com" (**.**.**.***) and accepting
TCP/IP connections on port 5432?
no work
My pg_hba.conf file looks like this
# TYPE  DATABASE        USER            ADDRESS                 METHOD
# "local" is for Unix domain socket connections only
local   all             all                                     trust
# IPv4 local connections:
host    all             all             127.0.0.1/32            md5
host    all             all             0.0.0.0/0               md5
# IPv6 local connections:
host    all             all             ::1/128                 trust
# Allow replication connections from localhost, by a user with the
# replication privilege.
#local   replication     postgres                                peer
#host    replication     postgres        127.0.0.1/32            ident
#host    replication     postgres        ::1/128                 ident
and my postgresql.conf file looks like
#------------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#------------------------------------------------------------------------------
# - Connection Settings -
listen_addresses = '*' # what IP address(es) to listen on;
# comma-separated list of addresses;
# defaults to 'localhost'; use '*' for all
# (change requires restart)
port = 5432 # (change requires restart)
I also have an EC2 Security Group that allows inbound TCP on port 5432 (screenshot omitted).
What am I doing wrong? I am a complete newbie, any help appreciated!

Connection timeout usually indicates that you cannot reach the host or are being blocked by a firewall.
First try to ping the host from your machine. If it pings all right, then there is probably a firewall between you and your EC2 instance. Your EC2 security screenshot looks right to me.
Are you behind a firewall that might be blocking outbound sessions from your local computer to the Internet?
After some troubleshooting over chat, we found that the AWS Security Group allowing TCP/5432 inbound was not assigned to the EC2 instance.
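If you want to check reachability from Python itself, a quick TCP probe of port 5432 tells you whether the network path is open before involving psycopg2 at all. A minimal sketch (the hostname below is a placeholder):
import socket

# Placeholder -- replace with your EC2 public DNS name.
host = "ec2-xx-xx-xx-xx.us-east-1.compute.amazonaws.com"
port = 5432
try:
    # Short timeout so a blocked port fails fast instead of hanging.
    with socket.create_connection((host, port), timeout=5):
        print("TCP connection to port 5432 succeeded")
except OSError as err:
    print("TCP connection failed:", err)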

There are many possible causes, so I would start with a trusted DB client like DBeaver and attempt the connection from your local machine, to rule out Python issues.
Depending on your setup, you may have a second incoming firewall (iptables, etc.) running inside your EC2 instance that needs to be configured or disabled.
Log in to the EC2 instance and see if you can connect to the server there. Run a DB client like psql on the instance and attempt to connect to the server with localhost:5432 as the target.
You may need to alter postgresql.conf so that the server generates log files, but those can tell you whether the server is being reached at all, and what the problem is.
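If you would rather stay in Python for that on-instance test, a minimal sketch to run on the EC2 instance itself (credentials and database name are placeholders) takes the network out of the picture:
import psycopg2

# Run this on the EC2 instance itself; values are placeholders.
conn = psycopg2.connect(user="postgres",
                        password="<password>",
                        host="localhost",
                        port="5432",
                        database="<db_name>")
print(conn.get_dsn_parameters())  # shows which host/port were actually used
conn.close()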

Related

Connecting Python to Redshift through SSH and SQLAlchemy

My current connection configuration is as follows (this is for a Redshift DB):
con = 'postgresql://username:password@hostname:port/databasename'
server = SSHTunnelForwarder(
    ('ssh host', 22),
    ssh_username="-",
    ssh_password="-",
    remote_bind_address=('db host', port)
)
server.start()
local_port = str(server.local_bind_port)
engine = sa.create_engine(con)
######## Reaches here then times out when reading the table
df_read = pd.read_sql_table('tablename', engine)
However, the Redshift database also sits behind an SSH host, which might be affecting the connection. The engine gets created, but when reading the SQL (pd.read_sql_table above) I reach this error:
(psycopg2.OperationalError) could not connect to server: Connection timed out (0x0000274C/10060)
Is the server running on host "xxx" (xxx) and accepting
TCP/IP connections on port xxx?
(Background on this error at: http://sqlalche.me/e/e3q8)
You cannot SSH into a Redshift cluster but you can use SSL to secure the connection.
Amazon Redshift security overview
Configuring security options for connections
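For illustration, a sketch of an SSL-secured connection with psycopg2 (the endpoint, port 5439, and credentials are placeholders; sslmode is a standard libpq parameter):
import psycopg2

# Placeholders -- substitute your cluster endpoint and credentials.
conn = psycopg2.connect(
    host="examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com",
    port=5439,
    user="awsuser",
    password="<password>",
    dbname="dev",
    sslmode="require")  # encrypt the connection instead of tunnelling over SSH
conn.close()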

psycopg2 connecting to postgresql database - error

I have a problem connecting to my PostgreSQL database.
I have a database name, password, hostname, and port, and I use this:
conn_string = "host='localhost' dbname='my_database' user='postgres' password='secret'"
But I get an error like this:
Is the server running on host "...." and accepting TCP/IP connections on port 5432
I don't know if I used the host correctly; I inserted the value of my hostname.
What is the difference between hostname and host? Could anyone help me?
psycopg2.connect(dbname=dbname, user=user, password=password, host=postgres_address, port=postgres_port)
This is a working example of how to connect; you must define dbname, user, password, postgres_address, and postgres_port beforehand. If you get a connection error, you can use ping to test the connection and telnet to test whether the port is open, or you can use DBeaver to test the connection to the Postgres server.
Most likely your database is behind a firewall; be sure to whitelist the IP you are trying to connect from.
Difference between host and hostname
The difference between host and hostname really depends on the context. In the context of a psycopg2/PostgreSQL connection, host normally means the IP address of the PostgreSQL server, or a resolvable name for it (such as a DNS name, if it has one). If you are running a Linux server, the output of the hostname command is unlikely to work in your case.
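If you are not sure whether the value you have is even resolvable, a quick check from Python (the name below is a placeholder) is:
import socket

# Placeholder -- use whatever you planned to pass as host=.
print(socket.gethostbyname("my-postgres-server.example.com"))
# Prints the IP it resolves to, or raises socket.gaierror if it cannot be
# resolved -- in which case psycopg2 will not reach it either.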
psycopg2 connection
Your connection string looks OK, but I suggest using the connection format below:
import psycopg2

try:
    connection = psycopg2.connect(user="sysadmin",
                                  password="pynative##29",
                                  host="127.0.0.1",
                                  port="5432",
                                  database="postgres_db")
except (Exception, psycopg2.Error) as error:
    print("Error while connecting to PostgreSQL", error)
You should use the IP address as the host value.
Troubleshooting
In case of a connection error, you should use other tools to test the connection to the PostgreSQL server, such as psql, pgAdmin 4, or DBeaver.
You can also use telnet or netcat to test the network connection to the PostgreSQL server, for example:
telnet PostgreSQL_ip_address 5432
nc -v PostgreSQL_ip_address 5432
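You can also make psycopg2 itself fail fast instead of hanging, by passing the standard libpq connect_timeout parameter. A sketch (all values are placeholders):
import psycopg2

try:
    conn = psycopg2.connect(host="<ip_address>", port=5432,
                            user="postgres", password="<password>",
                            dbname="<db_name>",
                            connect_timeout=5)  # seconds; libpq parameter
except psycopg2.OperationalError as err:
    print("Connection failed quickly instead of hanging:", err)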

Translate Oracle SQL Developer SSH Host w/ Local Port Forward Connection to Python

I'm trying to create a Python connection to a remote server through an SSH Jump Host (one I've successfully created in Oracle SQL Developer) but can't replicate in Python. Can connect to SSH Host successfully but fail to forward the port to the remote server due to timeout or error opening tunnels. Safe to assume my code is incorrect rather than server issues. Also need a solution that doesn't use the "with SSHTunnelForwarder() as server:" approach because I need a continuous session similar to OSD/cx_Oracle session rather than a batch processing function.
Similar examples provided here (and elsewhere) using paramiko, sshtunnel, and cx_Oracle haven't worked for me. Many other examples don't require (or at least clearly specify) separate login credentials for the remote server. I expect the critical unclear piece is which local host + port to use, which my SQL Developer connection doesn't require explicitly (although I've tried using the ports OSD chooses, not at the same time).
Closest match I think was best answer from paramiko-port-forwarding-around-a-nat-router
OSD Inputs
SSH Host
- host = proxy_hostname
- port = proxy_port = 22
- username = proxy_username
- password = proxy_password
Local Port Forward
- host = remote_hostname
- port = remote_port = 1521
- automatically assign local port = True
Connection
- username = remote_username
- password = remote_password
- connection type = SSH
- SID = remote_server_sid
Python Code
i.e., analogous code from paramiko-port-forwarding-around-a-nat-router
import paramiko
from paramiko import SSHClient

# Instantiate a client and connect to the proxy server
proxy_client = SSHClient()
proxy_client.connect(
    proxy_hostname,
    port=proxy_port,
    username=proxy_username,
    password=proxy_password)

# Get the client's transport and open a `direct-tcpip` channel passing
# the destination hostname:port and the local hostname:port
transport = proxy_client.get_transport()
dest_addr = (remote_hostname, remote_port)
local_addr = ('localhost', 55587)
channel = transport.open_channel("direct-tcpip", dest_addr, local_addr)

# Create a NEW client and pass this channel to it as the `sock` (along
# with whatever credentials you need to auth into your REMOTE box)
remote_client = SSHClient()
remote_client.connect(
    'localhost',
    port=55587,
    username=remote_username,
    password=remote_password,
    sock=channel)
Rather than a connection to the remote server I get
transport.py in start_client()
SSHException: Error reading SSH protocol banner
Solution
Finally figured out a solution! It is analogous to OSD's automatic local port assignment and doesn't require SSHTunnelForwarder's with statement. Hope it can help someone else. Use the question's OSD input variables with...
from sshtunnel import SSHTunnelForwarder
import cx_Oracle

server = SSHTunnelForwarder(
    (proxy_hostname, proxy_port),
    ssh_username=proxy_username,
    ssh_password=proxy_password,
    remote_bind_address=(remote_hostname, remote_port))
server.start()

db = cx_Oracle.connect('%s/%s@%s:%s/%s' % (remote_username, remote_password,
                                           'localhost', server.local_bind_port,
                                           remote_server_sid))
# do something with db
server.close()
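For completeness, the "# do something with db" placeholder could look like the sketch below, run before server.close(); the query and table name are hypothetical:
# Runs between cx_Oracle.connect(...) and server.close();
# the table name my_table is hypothetical.
cursor = db.cursor()
cursor.execute("SELECT COUNT(*) FROM my_table")
print(cursor.fetchone()[0])
cursor.close()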

Mysql.connector to access remote database in local network Python 3

I used the mysql.connector Python library to make changes to my local MySQL databases using:
from __future__ import print_function
import mysql.connector as kk

cnx = kk.connect(user='root', password='password123',
                 host='localhost',
                 database='db')
cursor = cnx.cursor(buffered=True)
sql = "DELETE FROM examples WHERE id = 4"
number_of_rows = cursor.execute(sql)
cnx.commit()
cnx.close()
This works fine, but when I try the same code changing only the 'host' parameter to something like
host='xxx.xxx.xxx.xxx'
(where the IP is that of a server connected to my local network), it won't update that particular database on that server.
The error thrown is something like:
mysql.connector.errors.DatabaseError: 2003 (HY000): Can't connect to MySQL server on 'xx.xxx.x.xx' (10060)
Why wouldn't this work?
First, check whether your local IP can access your remote server (check whether there is an IP restriction on your server). Then check whether your MySQL database uses the default port; if not, you must specify the port in your code.
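A minimal sketch of passing the port explicitly with mysql.connector (the values are placeholders; 3306 is only the default):
import mysql.connector as kk

# Placeholders -- use your server's IP and whatever port mysqld listens on.
cnx = kk.connect(user='root', password='password123',
                 host='xxx.xxx.xxx.xxx',
                 port=3306,
                 database='db')
cnx.close()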
Check if the database user you are using to connect to the database on the remote host has the correct access and privileges.
You can test this from the command line using:
mysql -u root -ppassword123 -h xxx.xxx.xxx.xxx db
If this does not work then debug as follow:
ping xxx.xxx.xxx.xxx. If the host is reachable, move on to the next step; if not, this IP is blocked, unavailable, or incorrect. Double-check the IP and check that both machines are on the same network.
Check if mysqld is running on the host (service mysqld restart). If it is, move on to the next step; if not, start mysqld. If it does not want to start, install it, start the service and set up your database.
Telnet the specific port to see if the port is blocked: telnet xxx.xxx.xxx.xxx 3306. If this works, move on to the next step. If this does not work, check your iptables and check if the port is open on the remote host.
Add a user to MySQL on the host: https://dev.mysql.com/doc/refman/8.0/en/adding-users.html
Restart mysqld and try the command above again.

MySQLdb connection problems

I'm having trouble with the MySQLdb module.
db = MySQLdb.connect(
    host='localhost',
    user='root',
    passwd='',
    db='testdb',
    port=3000)
(I'm using a custom port)
the error I get is:
Error 2002: Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)
Which doesn't make much sense since that's the default connection set in my.conf.. it's as though it's ignoring the connection info I give..
The mysql server is definitely there:
[root@baster ~]# mysql -uroot -p -P3000
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 19
Server version: 5.0.77 Source distribution
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql> use testdb;
Database changed
mysql>
I tried directly from the python prompt:
>>> db = MySQLdb.connect(user='root', passwd='', port=3000, host='localhost', db='pyneoform')
Traceback (most recent call last):
File "", line 1, in
File "/usr/lib64/python2.5/site-packages/MySQLdb/__init__.py", line 74, in Connect
return Connection(*args, **kwargs)
File "/usr/lib64/python2.5/site-packages/MySQLdb/connections.py", line 169, in __init__
super(Connection, self).__init__(*args, **kwargs2)
_mysql_exceptions.OperationalError: (2002, "Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)")
>>>
I'm confused... :(
Changing localhost to 127.0.0.1 solved my problem using MySQLdb:
db = MySQLdb.connect(
    host='127.0.0.1',
    user='root',
    passwd='',
    db='testdb',
    port=3000)
Using 127.0.0.1 forces the client to use TCP/IP, so that the server listening on the TCP port can pick it up. If the host is specified as localhost, a Unix socket or pipe will be used instead.
add unix_socket='path_to_socket' where path_to_socket should be the path of the MySQL socket, e.g. /var/run/mysqld/mysqld2.sock
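For example, a sketch of that keyword in the connect call (the socket path is the one suggested above and may differ on your system; check your my.cnf):
import MySQLdb

# Example socket path -- check your my.cnf for the real one.
db = MySQLdb.connect(user='root',
                     passwd='',
                     db='testdb',
                     unix_socket='/var/run/mysqld/mysqld2.sock')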
Make sure that the mysql server is listening for TCP connections, which you can check with netstat -nlp (on *nix). This is the type of connection you are attempting to make, and DBs normally don't listen on the network by default for security reasons. Also, note that specifying --host=localhost with the mysql command will still try to connect via a unix socket unless you specify otherwise. If mysql is not configured to listen for TCP connections, the command will also fail.
Here's a relevant section from the mysql 5.1 manual on unix sockets and troubleshooting connections. Note that the error described (2002) is the same one that you are getting.
Alternatively, check to see if the module you are using has an option to connect via unix sockets (as David Suggests).
I had this issue when the unix socket file was somewhere else; Python was trying to connect to a non-existent socket. Once this was corrected using the unix_socket option, it worked.
MySQL uses sockets when the host is 'localhost' and TCP/IP when the host is anything else. By default MySQL will listen for both - you can disable either sockets or networking in your my.cnf file (see mysql.com for details).
In your case, forget about port=3000; the MySQL client library is not paying any attention to it since you are using localhost. Specify the socket instead, as in unix_socket='path_to_socket'.
If you decide to move this script to another machine, you will need to change this connect string to use the actual host name or IP address; then you can lose the unix_socket and bring back the port. The default port for MySQL is 3306 - you don't need to specify that port, but you will need to specify 3000 if that is the port you are using.
As far as I can tell, the Python connector can ONLY connect to MySQL through an internet socket: unix sockets (the default for the command line client) are not supported.
In the CLI client, when you say "-h localhost", it actually interprets localhost as "Oh, localhost? I'll just connect to the unix socket instead", rather than the internet localhost socket.
I.e., the mysql CLI client is doing something magical, and the Python connector is doing something "consistent, but restrictive".
Choose your poison. (Pun not intended ;) )
Maybe try adding the keyword parameter unix_socket = None to connect()?
