pymongo - Unable to connect to mongodb running on EC2 - python

I am connecting to a MongoDB server running on EC2. The Mongo collections require authentication to connect.
I have tried everything, but I keep getting the following error and can't seem to fix it.
from pymongo import MongoClient

mongo_username = "username"
mongo_password = "password"
ssh_user = "user"
ssh_address = "ec2-**********.amazonaws.com"
ssh_port = 22
private_key = "path/to/key/mykey.pem"

def connect_to_mongo():
    try:
        client = MongoClient("mongodb://" + mongo_username + ":" + mongo_password + "@" + ssh_address,
                             ssl=True, ssl_keyfile=private_key)
        db = client.myDB
        # Should 'admin' be there or 'myDB'? 'admin' at least gets past if(auth), while 'myDB' doesn't
        auth = client.admin.authenticate(mongo_username, mongo_password)
        if auth:
            print "MongoDB connection successful"
            col = db.myCollection.count()
        else:
            print "MongoDB authentication failure: Please check the username or password"
        client.close()
    except Exception as e:
        print "MongoDB connection failure: Please check the connection details"
        print e

if __name__ == "__main__":
    connect_to_mongo()
Output:
MongoDB connection successful
MongoDB connection failure: Please check the connection details
SSL handshake failed: EOF occurred in violation of protocol (_ssl.c:590)

EC2 blocks port 27017 by default. Create an inbound rule for it in the instance's security group, as described here and here.
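For reference, a minimal sketch of creating such an inbound rule with boto3; the security-group ID is a placeholder, and opening 27017 to 0.0.0.0/0 is only sensible for testing:

import boto3

ec2 = boto3.client('ec2')

# 'sg-0123456789abcdef0' is a placeholder; use your instance's security group.
ec2.authorize_security_group_ingress(
    GroupId='sg-0123456789abcdef0',
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 27017,
        'ToPort': 27017,
        # 0.0.0.0/0 opens the port to everyone; restrict this in production.
        'IpRanges': [{'CidrIp': '0.0.0.0/0'}],
    }],
)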

I tried all the options, and this is what finally worked.
client = MongoClient("mongodb://" + ssh_address + ":27017")  # No private key needed
auth = client.myDB.authenticate(mongo_username, mongo_password)  # Authenticate against myDB, not admin
db = client.myDB
So basically I don't need to pass a private key (that is only required when SSHing into the EC2 instance), since port 27017 was already open to all incoming IPs. (I guess this was an important fact that I should have known and mentioned in the question.)
Also, I was trying to authenticate via the admin DB, which I shouldn't have done, because I was given access to myDB only.
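For what it's worth, a minimal sketch of the same connection in current pymongo style, where the credentials and the authentication database go straight into the URI (host and credentials are placeholders):

from pymongo import MongoClient

# Placeholders: substitute your real host and credentials.
# authSource=myDB tells the driver to authenticate against myDB, not admin.
uri = "mongodb://username:password@ec2-**********.amazonaws.com:27017/?authSource=myDB"
client = MongoClient(uri)
db = client["myDB"]
print(db["myCollection"].count_documents({}))  # simple round-trip check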

Related

Does psycopg2.connect inherit the proxy set in this context manager?

I have a Django app that uses a proxy to connect to an external Postgres database. I had to replace another package with psycopg2; it works fine locally, but it fails when I move to our production server, a Heroku app that uses QuotaguardStatic for proxy purposes. I'm not sure what's wrong here.
For some reason, the psycopg2.connect part returns an error showing a different IP address. Is it not inheriting the proxy set in the context manager? What would be the right way to handle this?
import os

import psycopg2
import requests
from psycopg2.extras import RealDictCursor

from apps.proxy.socks import Socks5Proxy

PROXY_URL = os.environ['QUOTAGUARDSTATIC_URL']

with Socks5Proxy(url=PROXY_URL) as p:
    public_ip = requests.get("http://wtfismyip.com/text").text
    print(public_ip)  # prints the expected IP address
    print('end')
    try:
        connection = psycopg2.connect(user=EXTERNAL_DB_USERNAME,
                                      password=EXTERNAL_DB_PASSWORD,
                                      host=EXTERNAL_DB_HOSTNAME,
                                      port=EXTERNAL_DB_PORT,
                                      database=EXTERNAL_DB_DATABASE,
                                      cursor_factory=RealDictCursor  # access query results like a dictionary
                                      )  # , ssl_context=True
    except psycopg2.DatabaseError as e:
        logger.error('Unable to connect to Illuminate database')
        raise e
Error is:
psycopg2.OperationalError: FATAL: no pg_hba.conf entry for host "12.345.678.910", user "username", database "databasename", SSL on
Basically, the IP address 12.345.678.910 does not match what was printed at the start of the context manager where the proxy is set. Do I need to set the proxy some other way so that the psycopg2 connection uses it?
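One plausible explanation, though it isn't confirmed in the thread: SOCKS context managers like this usually work by monkeypatching Python's socket module, which pure-Python clients such as requests pick up, while psycopg2 hands the connection to libpq, a C library that opens its sockets below the Python layer and never sees the patch. A quick sanity check of that assumption:

import os
import socket

from apps.proxy.socks import Socks5Proxy  # same helper as in the question

PROXY_URL = os.environ['QUOTAGUARDSTATIC_URL']

with Socks5Proxy(url=PROXY_URL) as p:
    # If the context manager swaps in a SOCKS socket class, this prints
    # something other than the built-in socket class. psycopg2/libpq opens
    # its sockets in C and never sees this substitution.
    print(socket.socket)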

How to get the column names in redshift using Python boto3

I want to get the column names in Redshift using Python boto3. Steps taken so far:
Created a Redshift cluster
Inserted data into it
Configured Secrets Manager
Configured a SageMaker notebook
Opened the Jupyter notebook and wrote the code below
import boto3
import time

client = boto3.client('redshift-data')
response = client.execute_statement(
    ClusterIdentifier="test",
    Database="dev",
    SecretArn="{SECRET-ARN}",
    Sql="SELECT `COLUMN_NAME` FROM `INFORMATION_SCHEMA`.`COLUMNS` WHERE `TABLE_SCHEMA`='dev' AND `TABLE_NAME`='dojoredshift'")
I got a response back, but there is no table schema inside it.
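A likely reason, not spelled out in the thread: execute_statement is asynchronous and only returns a statement Id, so the rows have to be fetched separately once the statement finishes. A minimal sketch of that flow with the same placeholder identifiers (note that backtick quoting is MySQL syntax; Redshift takes plain identifiers, and the schema is more likely 'public' than 'dev'):

import time

import boto3

client = boto3.client('redshift-data')
response = client.execute_statement(
    ClusterIdentifier="test",
    Database="dev",
    SecretArn="{SECRET-ARN}",
    Sql="SELECT column_name FROM information_schema.columns "
        "WHERE table_schema = 'public' AND table_name = 'dojoredshift'")

# execute_statement only queues the statement; poll until it finishes.
while True:
    desc = client.describe_statement(Id=response['Id'])
    if desc['Status'] in ('FINISHED', 'FAILED', 'ABORTED'):
        break
    time.sleep(1)

if desc['Status'] == 'FINISHED':
    result = client.get_statement_result(Id=response['Id'])
    for record in result['Records']:
        print(record[0].get('stringValue'))  # one column name per row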
Below is the code I used to connect; it times out.
import psycopg2

HOST = 'xx.xx.xx.xx'
PORT = 5439
USER = 'aswuser'
PASSWORD = 'Password1!'
DATABASE = 'dev'

def db_connection():
    conn = psycopg2.connect(host=HOST, port=PORT, user=USER, password=PASSWORD, database=DATABASE)
    return conn
To get the IP address, go to https://ipinfo.info/html/ip_checker.php and pass the hostname of your Redshift cluster (xx.xx.us-east-1.redshift.amazonaws.com), or look it up on the cluster page itself.
I got this error while running the code above:
OperationalError: could not connect to server: Connection timed out
Is the server running on host "x.xx.xx..xx" and accepting
TCP/IP connections on port 5439?
I fixed it with the code below, after adding the inbound rules mentioned above.
import logging

import boto3
import psycopg2

logger = logging.getLogger(__name__)

# Credentials can be set using different methodologies. For this test,
# I ran from my local machine, where I had used the CLI command "aws configure"
# to set my access key and secret access key.
client = boto3.client(service_name='redshift',
                      region_name='us-east-1')

# Use boto3 to get the database password instead of hardcoding it in the code.
cluster_creds = client.get_cluster_credentials(
    DbUser='awsuser',
    DbName='dev',
    ClusterIdentifier='redshift-cluster-1',
    AutoCreate=False)

try:
    # Database connection that uses the DbPassword boto3 returned.
    conn = psycopg2.connect(
        host='redshift-cluster-1.cvlywrhztirh.us-east-1.redshift.amazonaws.com',
        port='5439',
        user=cluster_creds['DbUser'],
        password=cluster_creds['DbPassword'],
        database='dev'
    )
    # Verify that the connection worked.
    cursor = conn.cursor()
    cursor.execute("SELECT VERSION()")
    results = cursor.fetchone()
    ver = results[0]
    if ver is None:
        print("Could not find version")
    else:
        print("The version is " + ver)
except Exception:
    logger.exception('Failed to open database connection.')
    print("Failed")

Connection to Oracle database via ldap (python)

My problem is fairly complicated. I managed to write a simple Python script to connect to an Oracle database over LDAP, but I keep wondering whether it is done correctly, because I can't connect properly. The login, password, LDAP server, and base DN are all correct, since they work in the Oracle SQL Developer app.
import ldap
#from ldap import open

if __name__ == "__main__":
    ldap_server = "xxxxxxxx.com:389:636"
    username = "xxxxxx"
    password = "xxxxxxx"
    user_dn = "uid=" + username + ",dc=na,dc=xx,dc=com"
    base_dn = "cn=xxxxxxx,dc=na,dc=xx,dc=com"
    connect = ldap.open(ldap_server)
    search_filter = "uid=" + username
    try:
        connect.bind_s(user_dn, password)
        result = connect.search_s(base_dn, ldap.SCOPE_SUBTREE, search_filter)
        connect.unbind_s()
        print(result)
    except ldap.LDAPError:
        connect.unbind_s()
        print("authentication error")
Can anyone help me? I'm losing my mind :(
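One thing that stands out, though it's only a guess without the actual error message: ldap.open is long deprecated and expects a bare hostname, and "xxxxxxxx.com:389:636" packs two ports into one string. The modern entry point is ldap.initialize with a single ldap:// or ldaps:// URI, one port per URI. A minimal sketch with the same placeholder values:

import ldap

# Placeholders carried over from the question; pick one port per URI.
server_uri = "ldap://xxxxxxxx.com:389"  # or "ldaps://xxxxxxxx.com:636" for TLS
username = "xxxxxx"
password = "xxxxxxx"
user_dn = "uid=" + username + ",dc=na,dc=xx,dc=com"
base_dn = "cn=xxxxxxx,dc=na,dc=xx,dc=com"

conn = ldap.initialize(server_uri)
try:
    conn.simple_bind_s(user_dn, password)
    # Parenthesized RFC 4515 filter; "uid=..." without parens may be rejected.
    result = conn.search_s(base_dn, ldap.SCOPE_SUBTREE, "(uid=%s)" % username)
    print(result)
except ldap.LDAPError as e:
    print("authentication error:", e)
finally:
    conn.unbind_s()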

global options in python ldap

I was playing with python-ldap in the console and got results I can't explain. I hope somebody can clarify this for me.
Open a new Python console:
import ldap

certfile = '~/ad-server.test.loc.pem'
ldap.set_option(ldap.OPT_X_TLS_CACERTFILE, certfile)
who = 'CN=Administrator,CN=Users,dc=test,dc=loc'
passwd = 'passwd'
sslserver = 'ldaps://ad-server.test.loc:636'

# Let's say I want to disable certificate verification for the next connection
ldap.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_ALLOW)
conn = ldap.initialize(sslserver)
conn.simple_bind_s(who, passwd)
(97, [])
# Connected successfully.
# Now I want to enable certificate verification and try to connect again (this time it should
# fail, because I use a self-signed certificate).
# Unbind the connection:
conn.unbind()
ldap.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_DEMAND)
conn = ldap.initialize(sslserver)
# Trying to connect
conn.simple_bind_s(who, passwd)
(97, [])
# It also connected successfully. Why?
Here is the question: I turned on certificate verification, so the connection attempt should have failed with an error (I use a self-signed certificate), yet it connected successfully. Why?
Another example: do the same things, but in a different order.
Open a new Python console:
import ldap

certfile = '~/ad-server.test.loc.pem'
ldap.set_option(ldap.OPT_X_TLS_CACERTFILE, certfile)
who = 'CN=Administrator,CN=Users,dc=test,dc=loc'
passwd = 'passwd'
sslserver = 'ldaps://ad-server.test.loc:636'

# Trying to connect with verification enabled against the self-signed certificate
ldap.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_DEMAND)
conn = ldap.initialize(sslserver)
conn.simple_bind_s(who, passwd)
Traceback bla bla bla
ldap.SERVER_DOWN: {'info': 'error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed', 'desc': "Can't contact LDAP server"}
# OK, let's disable verification and try again
conn.unbind()
ldap.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_ALLOW)
conn = ldap.initialize(sslserver)
conn.simple_bind_s(who, passwd)
Traceback bla bla bla
ldap.SERVER_DOWN: {'info': 'error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed', 'desc': "Can't contact LDAP server"}
# Even though I disabled verification, the connection failed. Why? I expected it to succeed.
Can anybody explain this?
We just ran into a similar problem. Basically, all of the TLS options are set globally by default and stored in a context object used by GNUTLS. The first time a connection is created, that becomes the TLS context used by all subsequent connections in that process.
To change this behavior, the very last TLS-related set_option call you make should be:
connection.set_option(ldap.OPT_X_TLS_NEWCTX, 0)
This is actually done in one of the python-ldap demos.
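In other words, a minimal sketch of the per-connection pattern the answer describes, using the placeholder values from the question: set the TLS options on the connection object itself and finish with OPT_X_TLS_NEWCTX so a fresh TLS context is built from them.

import ldap

conn = ldap.initialize('ldaps://ad-server.test.loc:636')
conn.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_ALLOW)
# Must be the last TLS-related option: it creates a new TLS context
# that picks up the settings above, for this connection only.
conn.set_option(ldap.OPT_X_TLS_NEWCTX, 0)
conn.simple_bind_s('CN=Administrator,CN=Users,dc=test,dc=loc', 'passwd')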

MySQL Connector in Python

Trying to connect to my MySQL DB on my VPS. I'm getting the following error when using the code below.
(I've stripped out my credentials; however, I know they work, as I use them for my PHP development as well.)
CODE
import mysql.connector as mysql
from mysql.connector import errorcode

try:
    conn = mysql.connect(user=%USER%,
                         password=%PW%,
                         host=%HOST%,
                         database=%DB%)
except mysql.Error as err:
    if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
        print("Something is wrong with your user name or password")
    elif err.errno == errorcode.ER_BAD_DB_ERROR:
        print("Database does not exist")
    else:
        print(err)
else:
    conn.close()
ERROR MESSAGE
2003: Can't connect to MySQL server on '%HOST%:3306' (10060 A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond)
Connector that I'm using:
http://cdn.mysql.com/Downloads/Connector-Python/mysql-connector-python-2.0.1-py3.4.msi
Any idea what I'm doing wrong?
Check the GRANTs on your MySQL database. It might be that you haven't been GRANTed permission to connect from that host with the given username and password.
The MySQL error number is 2003. I usually find it helpful to do a Google search to see whether others have had the same problem:
http://dev.mysql.com/doc/refman/5.0/en/access-denied.html
Is there a firewall between the client machine and the database server? I would hope port 3306 is blocked by default. If so, you'll need to add a firewall rule allowing access from the client IP to the database IP on port 3306.
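To test the GRANT theory, a rough sketch: connect on the VPS itself (where the credentials are known to work, e.g. from the PHP setup) and check which hosts the user is allowed to connect from. The admin credentials here are placeholders:

import mysql.connector as mysql

# Placeholder admin credentials; run this on the VPS, where access works.
conn = mysql.connect(user='root', password='<root password>', host='localhost')
cur = conn.cursor()
cur.execute("SELECT user, host FROM mysql.user WHERE user = %s", ('%USER%',))
for user, host in cur.fetchall():
    # A host column containing only 'localhost' would explain the remote failure.
    print(user, host)
conn.close()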

Categories

Resources