I'm using jaydebeapi to connect to an Oracle DB. The code is as follows:
host = [address]
port = "1521"
sid = "ctginst1"
database = "oracle"
drivertype = "thin"
uid = [user]
pwd = [pass]
driver_class = "oracle.jdbc.OracleDriver"
driver_file = "ojdbc10.jar"
connection_string = "jdbc:{}:{}@{}:{}:{}".format(database, drivertype, host, port, sid)
conn = jaydebeapi.connect(driver_class, connection_string, [uid, pwd], driver_file)
However this fails and gives me an error:
java.lang.RuntimeException: Class oracle.jdbc.OracleDriver not found
Edit:
By passing CLASSPATH with the .jar's location when starting the JVM, and only then attempting the connection, I managed to proceed further with:
import jpype
jpype.startJVM(jpype.getDefaultJVMPath(), '-Djava.class.path=%s' % driver_file)
And now I am getting a java.sql.SQLException: Invalid Oracle URL specified error.
Okay, so from there: there was apparently a colon missing before the "@".
The full successful connection code looks like:
import jaydebeapi
import jpype
host = host
port = "1521"
sid = "ctginst1"
database = "oracle"
drivertype = "thin"
uid = user
pwd = password
driver_class = "oracle.jdbc.OracleDriver"
driver_file = r"C:\ojdbc8.jar"
connection_string = "jdbc:{}:{}:@{}:{}:{}".format(database, drivertype, host, port, sid)
jpype.startJVM(jpype.getDefaultJVMPath(), '-Djava.class.path=%s' % driver_file)
conn=jaydebeapi.connect(driver_class, connection_string, [uid, pwd])
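The working URL shape above can be captured in a tiny helper (the function name is mine, not part of jaydebeapi); note the colon and "@" between the driver type and the host:

```python
# Hypothetical helper (not a jaydebeapi API) that builds the Oracle
# thin-driver JDBC URL: jdbc:oracle:thin:@host:port:sid
def make_oracle_jdbc_url(host, port, sid):
    return "jdbc:oracle:thin:@{}:{}:{}".format(host, port, sid)

url = make_oracle_jdbc_url("dbhost.example.com", "1521", "ctginst1")
print(url)  # jdbc:oracle:thin:@dbhost.example.com:1521:ctginst1
```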
Related
I'm using Jupyter-Lab (Python) inside a Docker container, which runs on an AWS EC2 instance. This Docker container has an Oracle Instant Client installed inside it, so everything is set up. The problem is that I'm still having trouble connecting from this container to my AWS RDS Oracle database, but only when using SQLAlchemy.
When I try the connection using a cx-Oracle==8.2.1 engine:
host = '***********************'
user = '*********'
password = '**********'
port = '****'
service = '****'
dsn_tns = cx_Oracle.makedsn(host, port, service)
engine_oracle = cx_Oracle.connect(user=user, password=password, dsn=dsn_tns)
Everything works fine. I can read tables using pandas read_sql(), I can create tables using cx_Oracle execute(), etc.
But when I try to take a DataFrame and send it to my RDS using pandas to_sql(), my cx_Oracle connection returns the error:
DatabaseError: ORA-01036: illegal variable name/number
I then tried to use a SQLAlchemy==1.4.22 engine from the string:
tns = """
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = %s)(PORT = %s))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = %s)
)
)
""" % (host, port, service)
engine_alchemy = create_engine('oracle+cx_oracle://%s:%s@%s' % (user, password, tns))
But I get this error:
DatabaseError: ORA-12154: TNS:could not resolve the connect identifier specified
And I keep getting this error even when I try to use pandas read_sql with the SQLAlchemy engine. Thus, I ran out of options. Can somebody help me please?
EDIT*
I tried again with SQLAlchemy==1.3.9 and it worked. Does anybody know why?
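One commonly cited difference is that SQLAlchemy 1.4 parses connection URLs more strictly than 1.3, so a multi-line TNS descriptor embedded in the URL can trip it up. A sketch of a workaround that avoids URL parsing of the descriptor entirely, by passing the DSN through connect_args (all values below are placeholders, and the create_engine call is commented out because it needs sqlalchemy and cx_Oracle installed):

```python
# Placeholders, not real credentials.
host, port, service = "myhost.example.com", 1521, "myservice"
user, password = "scott", "tiger"

# Build the TNS descriptor as a single line, with no embedded whitespace.
tns = (
    "(DESCRIPTION="
    "(ADDRESS=(PROTOCOL=TCP)(HOST={h})(PORT={p}))"
    "(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME={s})))"
).format(h=host, p=port, s=service)

# Host part of the URL is left empty on purpose; the DSN is handed to
# cx_Oracle.connect() directly via connect_args instead.
url = "oracle+cx_oracle://{}:{}@".format(user, password)
connect_args = {"dsn": tns}
# engine = create_engine(url, connect_args=connect_args)

print(url)
```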
The code I'm using for reading and sending a test table from and to Oracle is:
sql = """
SELECT
*
FROM
DADOS_MIS.DR_ACIO_ATIVOS_HASH
WHERE
ROWNUM <= 5"""
df = pd.read_sql(sql, engine_oracle)
dtyp1 = {c: 'VARCHAR2(' + str(df[c].str.len().max()) + ')'
         for c in df.columns[df.dtypes == 'object'].tolist()}
dtyp2 = {c: 'NUMBER'
         for c in df.columns[df.dtypes == 'float64'].tolist()}
dtyp3 = {c: 'DATE'
         for c in df.columns[df.dtypes == 'datetime64[ns]'].tolist()}
dtyp4 = {c: 'NUMBER'
         for c in df.columns[df.dtypes == 'int64'].tolist()}
dtyp_total = dtyp1
dtyp_total.update(dtyp2)
dtyp_total.update(dtyp3)
dtyp_total.update(dtyp4)
df.to_sql(name='teste', con=engine_oracle, if_exists='replace', dtype=dtyp_total, index=False)
The dtyp_total is:
{'IDENTIFICADOR': 'VARCHAR2(32)',
'IDENTIFICADOR_PRODUTO': 'VARCHAR2(32)',
'DATA_CHAMADA': 'VARCHAR2(19)',
'TABULACAO': 'VARCHAR2(25)'}
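The dtype mapping above can be sketched as a small pure-Python helper that maps pandas dtype names to Oracle column types (the function name is illustrative, not a pandas API):

```python
# Illustrative helper: map a pandas dtype name to an Oracle column type.
# max_len is only used for string columns, mirroring the VARCHAR2 sizing above.
def oracle_type_for(dtype_name, max_len=None):
    if dtype_name == "object":
        return "VARCHAR2({})".format(max_len)
    if dtype_name in ("float64", "int64"):
        return "NUMBER"
    if dtype_name.startswith("datetime64"):
        return "DATE"
    return None  # let pandas/SQLAlchemy pick a default type

print(oracle_type_for("object", 32))   # VARCHAR2(32)
print(oracle_type_for("float64"))      # NUMBER
```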
I want to get the column names in Redshift using Python boto3. So far I have:
Created a Redshift cluster
Inserted data into it
Configured Secrets Manager
Configured a SageMaker notebook
Opened the Jupyter notebook and wrote the code below:
import boto3
import time
client = boto3.client('redshift-data')
response = client.execute_statement(ClusterIdentifier="test", Database="dev", SecretArn="{SECRET-ARN}", Sql="SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA='dev' AND TABLE_NAME='dojoredshift'")
I got the response, but there is no table schema inside it.
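One likely cause: the Redshift Data API is asynchronous, so execute_statement only returns a statement Id; the rows come from get_statement_result once the statement has finished. A sketch of the polling step (the boto3 'redshift-data' client is created elsewhere and passed in; also note Redshift speaks the PostgreSQL dialect, so no MySQL-style backticks around identifiers):

```python
import time

def fetch_statement_rows(client, statement_id, poll_seconds=1):
    """Poll a redshift-data statement until it finishes, then return its rows.
    `client` is a boto3 'redshift-data' client created elsewhere."""
    while True:
        desc = client.describe_statement(Id=statement_id)
        if desc["Status"] == "FINISHED":
            return client.get_statement_result(Id=statement_id)["Records"]
        if desc["Status"] in ("FAILED", "ABORTED"):
            raise RuntimeError(desc.get("Error", desc["Status"]))
        time.sleep(poll_seconds)

# PostgreSQL-style query, no backticks:
sql = ("SELECT column_name FROM information_schema.columns "
       "WHERE table_name = 'dojoredshift'")
```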
Below is the code I used to connect; I am getting timed out:
import psycopg2
HOST = 'xx.xx.xx.xx'
PORT = 5439
USER = 'aswuser'
PASSWORD = 'Password1!'
DATABASE = 'dev'
def db_connection():
    conn = psycopg2.connect(host=HOST, port=PORT, user=USER, password=PASSWORD, database=DATABASE)
    return conn
To get the IP address, go to https://ipinfo.info/html/ip_checker.php and pass your Redshift cluster hostname (xx.xx.us-east-1.redshift.amazonaws.com), or you can see it on the cluster page itself.
I got this error while running the above code:
OperationalError: could not connect to server: Connection timed out
Is the server running on host "x.xx.xx..xx" and accepting
TCP/IP connections on port 5439?
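A quick way to tell a network problem (security group, VPC routing) apart from a credentials problem is a plain TCP check before involving psycopg2 at all. A small sketch using only the standard library (host/port below are placeholders):

```python
import socket

def port_reachable(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds.
    A timeout here points at a security-group/firewall issue,
    not at a bad username or password."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_reachable("redshift-cluster-1.xxxx.us-east-1.redshift.amazonaws.com", 5439)
print(port_reachable("127.0.0.1", 45299, timeout=1))
```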
I fixed it with the code below, after adding the inbound rules described above.
import boto3
import logging
import psycopg2

logger = logging.getLogger(__name__)
# Credentials can be set using different methodologies. For this test,
# I ran from my local machine which I used cli command "aws configure"
# to set my Access key and secret access key
client = boto3.client(service_name='redshift',
region_name='us-east-1')
#
#Using boto3 to get the Database password instead of hardcoding it in the code
#
cluster_creds = client.get_cluster_credentials(
DbUser='awsuser',
DbName='dev',
ClusterIdentifier='redshift-cluster-1',
AutoCreate=False)
try:
    # Database connection below that uses the DbPassword that boto3 returned
    conn = psycopg2.connect(
        host='redshift-cluster-1.cvlywrhztirh.us-east-1.redshift.amazonaws.com',
        port='5439',
        user=cluster_creds['DbUser'],
        password=cluster_creds['DbPassword'],
        database='dev'
    )
    # Verifies that the connection worked
    cursor = conn.cursor()
    cursor.execute("SELECT VERSION()")
    results = cursor.fetchone()
    ver = results[0]
    if ver is None:
        print("Could not find version")
    else:
        print("The version is " + ver)
except Exception:
    logger.exception('Failed to open database connection.')
    print("Failed")
I am trying to connect to a remote Oracle database using Python.
I am able to access the database directly using DBeaver and I have copied the parameters in the Python code below from the "Connection Configuration --> Connection settings --> General" tab (which can be opened by right-clicking on the database and selecting "Edit connection"):
import cx_Oracle
host_name = # content of "Host"
port_number = # content of "Port"
user_name = # content of "User name"
pwd = # content of "Password"
service_name = # content of "Database" (the "Service Name" option is selected)
dsn_tns = cx_Oracle.makedsn(host_name, port_number, service_name = service_name)
conn = cx_Oracle.connect(user = user_name, password = pwd, dsn = dsn_tns)
However, I get the following error:
DatabaseError: ORA-12541: TNS:no listener
Other answers I found related to this question suggested modifying some values inside the listener.ora file, but I have no such file on my computer nor do I know where it can be retrieved. Does anyone have any suggestions?
There could be two reasons for that error:
The database was briefly unavailable at the time when you tried to access it.
The Oracle client application on your machine is not configured correctly.
I think this config is not correct.
See the link : https://oracle.github.io/python-cx_Oracle/
ip = '192.168.1.1'
port = 1521
SID = 'YOURSIDHERE'
dsn_tns = cx_Oracle.makedsn(ip, port, SID)
db = cx_Oracle.connect('username', 'password', dsn_tns)
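One caveat with the snippet above: makedsn's third positional argument is a SID, while the question's database is addressed by a service name (DBeaver's "Service Name" option). cx_Oracle also accepts an EZConnect string as the DSN, which sidesteps makedsn and any listener.ora file on the client. A sketch with placeholder values (the actual connect call is commented out because it needs cx_Oracle and a live listener):

```python
# EZConnect DSN format: host:port/service_name (placeholders below).
host, port, service = "192.168.1.1", 1521, "YOURSERVICE"
dsn = "{}:{}/{}".format(host, port, service)
# conn = cx_Oracle.connect("username", "password", dsn)
print(dsn)
```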
I am connecting to a mongodb server on EC2. The mongo collections require authentication to connect.
I tried everything but I am getting the following error and can't seem to correct it.
from pymongo import MongoClient
mongo_username = "username"
mongo_password = "password"
ssh_user = "user"
ssh_address = "ec2-**********.amazonaws.com"
ssh_port = 22
private_key = "path/to/key/mykey.pem"
def connect_to_mongo():
    try:
        client = MongoClient("mongodb://" + mongo_username + ":" + mongo_password + "@" + ssh_address, ssl=True, ssl_keyfile=private_key)
        db = client.myDB
        # Should 'admin' be there or 'myDB'? 'admin' at least gets if(auth) passed, while 'myDB' doesn't
        auth = client.admin.authenticate(mongo_username, mongo_password)
        if auth:
            print "MongoDB connection successful"
            col = db.myCollection.count()
        else:
            print "MongoDB authentication failure: Please check the username or password"
        client.close()
    except Exception as e:
        print "MongoDB connection failure: Please check the connection details"
        print e

if __name__ == "__main__":
    connect_to_mongo()
Output :
MongoDB connection successful
MongoDB connection failure: Please check the connection details
SSL handshake failed: EOF occurred in violation of protocol (_ssl.c:590)
EC2 will close port 27017 by default. Create the in-bound rule as described here and here.
I tried all the options and finally this worked.
client = MongoClient("mongodb://" + ssh_address+":27017") # No private key passing
auth = client.myDB.authenticate(mongo_username,mongo_password) # Authenticate to myDB and not admin
db = client.myDB
So basically I don't need to pass a private key (which is required when SSHing over to the EC2 instance), since the port was already opened for all incoming IPs (I guess this is an important fact that I should have known and posted in the question).
Also I was trying to authenticate via the admin DB, which I shouldn't have done because I was given access to myDB only.
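The same fix can be expressed in a single connection URI: the authSource query parameter tells the driver which database to authenticate against (myDB here rather than admin). All values below are placeholders; quote_plus guards against special characters in the password, since a literal "@" in it would break the URI:

```python
from urllib.parse import quote_plus

# Placeholders, not real credentials.
user, password = "username", "p@ssword"
host, db = "ec2-xx.amazonaws.com", "myDB"

# authSource=myDB makes the driver authenticate against myDB, not admin.
uri = "mongodb://{}:{}@{}:27017/{}?authSource={}".format(
    quote_plus(user), quote_plus(password), host, db, db)
# client = MongoClient(uri)  # requires pymongo
print(uri)
```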
I have a .ini configuration file where I have specified the server name, database name, username and password with which I can connect my app to MSSQL:
self.db = pyodbc.connect(
    'driver={SQL Server};server=homeserver;database=testdb;uid=home;pwd=1234')
The corresponding data mentioned in the above connect statement is now in config.ini:
self.configwrite = ConfigParser.RawConfigParser()
configread = SafeConfigParser()
configread.read('config.ini')
driver = configread.get('DataBase Settings','Driver')
server = str(configread.get('DataBase Settings','Server'))
db = str(configread.get('DataBase Settings','Database'))
user = str(configread.get('DataBase Settings','Username'))
password = str(configread.get('DataBase Settings','Password'))
How can I pass these variables in the pyodbc connect statement?
I tried this:
self.db = pyodbc.connect('driver={Driver};server=server;database=db;uid=user;pwd=password')
But I am getting an error.
Other options for the connect function:
# using keywords for SQL Server authentication
self.db = pyodbc.connect(driver=driver, server=server, database=db,
user=user, password=password)
# using keywords for Windows authentication
self.db = pyodbc.connect(driver=driver, server=server, database=db,
trusted_connection='yes')
self.db = pyodbc.connect('driver={%s};server=%s;database=%s;uid=%s;pwd=%s' % ( driver, server, db, user, password ) )
%s is used to include variables in the string; the variables are placed into the string according to their order after the %.
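Putting the two pieces together, reading config.ini and interpolating the values into the connection string, might look like the sketch below (the section and option names are assumed to match the question's config.ini; the ini content is inlined here only to keep the example self-contained):

```python
import configparser

# Assumed to mirror the question's config.ini layout.
ini_text = """
[DataBase Settings]
Driver = SQL Server
Server = homeserver
Database = testdb
Username = home
Password = 1234
"""

config = configparser.ConfigParser()
config.read_string(ini_text)  # with a real file: config.read('config.ini')
db_cfg = config["DataBase Settings"]

# {{ and }} are literal braces in str.format().
conn_str = "driver={{{}}};server={};database={};uid={};pwd={}".format(
    db_cfg["Driver"], db_cfg["Server"], db_cfg["Database"],
    db_cfg["Username"], db_cfg["Password"])
# self.db = pyodbc.connect(conn_str)  # requires pyodbc
print(conn_str)
```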
Mixing strings and input variable in sql connection string using pyodbc library - Python
Inspired by this answer:
conn = pyodbc.connect('Driver={SQL Server};'
                      'Server=' + servername + ';'
                      'Database=master;'
                      'UID=sa;'
                      'PWD=' + password + ';'
                      )