Python + cx_Oracle : ORA-01017: invalid username/password; logon denied

I usually connect to OraClient18Home1 through Toad 12.6.0.53 so I can explore data and run my queries. We used a tnsnames.ora file to set up my connection in Toad (there are no syntax errors in the tnsnames file).
Now I would like to connect to the Oracle database through a Python IDE (I'm using Spyder). I have already installed cx_Oracle, and I'm using Python 3.9.
Here is the tnsnames.ora entry:
FINDB=
  (DESCRIPTION_LIST=
    (FAILOVER=on)
    (LOAD_BALANCE=off)
    (DESCRIPTION=
      (ADDRESS_LIST=
        (LOAD_BALANCE=off)
        (ADDRESS=
          (PROTOCOL=TCP)
          (HOST=finprd01-csscan.ca3.ocm.s8529456.oraclecloudatcustomer.com)
          (PORT=1521)
        )
      )
      (CONNECT_DATA=
        (SERVICE_NAME=FINDB_rw.ca3.ocm.s8529456.oraclecloudatcustomer.com)
      )
    )
    (DESCRIPTION=
      (ADDRESS_LIST=
        (LOAD_BALANCE=off)
        (ADDRESS=
          (PROTOCOL=TCP)
          (HOST=finprd02-csscan.ca1.ocm.s7896852.oraclecloudatcustomer.com)
          (PORT=1521)
        )
      )
      (CONNECT_DATA=
        (SERVICE_NAME=FINDB_rw.ca1.ocm.s7896852.oraclecloudatcustomer.com)
      )
    )
  )
I used the same username/password/port/host/service_name as the ones I use in Toad.
Here is my Python code:
import cx_Oracle
p_password = "111111"
con = cx_Oracle.connect(user="User101", password=p_password ,dsn="finprd01-csscan.ca3.ocm.s8529456.oraclecloudatcustomer.com / FINDB_rw.ca3.ocm.s8529456.oraclecloudatcustomer.com", encoding="UTF-8")
print("Database version:", con.version)
print("Oracle Python version:", cx_Oracle.version)
When I run the code I get: ORA-01017: invalid username/password; logon denied.
I should mention that in the cx_Oracle documentation they use one host and one service_name, but if you look at my tnsnames entry I have two hosts and two service_names. I'm not a data architect and I don't really understand why we have two of each, whereas all the tutorials and blogs use just one.
In my Python connection, I randomly picked just one host and one service_name.
By the way, I was using this documentation: https://cx-oracle.readthedocs.io/en/latest/user_guide/connection_handling.html
I used the Easy Connect syntax for connection strings from the link above; this is the format:
connection = cx_Oracle.connect(user="hr", password=userpwd,dsn="dbhost.example.com/orclpdb1",encoding="UTF-8")
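For reference, the same page also documents passing a full connect descriptor as the dsn, which (if I understand correctly) would keep the failover across both hosts instead of picking one by hand. A sketch of what I think that would look like (untested; the password is a placeholder):

import cx_Oracle

# Untested sketch: pass the whole FINDB descriptor (both hosts, FAILOVER=on)
# as the dsn instead of hand-picking one host/service_name.
full_dsn = (
    "(DESCRIPTION_LIST=(FAILOVER=on)(LOAD_BALANCE=off)"
    "(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=off)(ADDRESS=(PROTOCOL=TCP)"
    "(HOST=finprd01-csscan.ca3.ocm.s8529456.oraclecloudatcustomer.com)(PORT=1521)))"
    "(CONNECT_DATA=(SERVICE_NAME=FINDB_rw.ca3.ocm.s8529456.oraclecloudatcustomer.com)))"
    "(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=off)(ADDRESS=(PROTOCOL=TCP)"
    "(HOST=finprd02-csscan.ca1.ocm.s7896852.oraclecloudatcustomer.com)(PORT=1521)))"
    "(CONNECT_DATA=(SERVICE_NAME=FINDB_rw.ca1.ocm.s7896852.oraclecloudatcustomer.com))))"
)
con = cx_Oracle.connect(user="User101", password="<placeholder>", dsn=full_dsn, encoding="UTF-8")
print("Database version:", con.version)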
What am I doing wrong?
I assumed that since I have access to the Oracle database through Toad, I could just reuse the same connection parameters from Toad (username/password/port/host/service_name) and it would be easy to establish the connection from a Python IDE (I'm using Spyder).
Do I need to contact an admin for approval? I can connect to the database through Toad; maybe connecting through Python requires extra permission?
You might suggest contacting an admin for help, but they are short-staffed and the earliest I could get an answer or reach someone by phone is in two weeks. Therefore I'm trying to do this myself. If you think contacting them is the only solution, I will have to do it then!
I would be very thankful for any tips or help.
Thanks

Related

Create a Java UDF that uses geoip2 library with the database in a S3 bucket

Correct me if I'm wrong, but my understanding of UDFs in Snowpark is that you can send the UDF from your IDE and it will be executed inside Snowflake. I have a staged database file called GeoLite2-City.mmdb inside an S3 bucket on my Snowflake account, and I would like to use it to retrieve information about an IP address. So my strategy was to:
1. Register a UDF which would return a response string, from my IDE (PyCharm).
2. Create a main function which would simply query the database about the IP address and give me a response.
The problem is: how can the UDF and my code see the staged file at
s3://path/GeoLite2-City.mmdb
in my bucket? In my case I simply used the bare file name (with geoip2.database.Reader('GeoLite2-City.mmdb') as reader:), assuming that it will eventually find it, since
stage_location='@AWS_CSV_STAGE' is the same stage where the UDF will be saved. But I'm not sure I understand correctly what the stage_location option refers to exactly.
At the moment I get the following error:
"Cannot add package geoip2 because Anaconda terms must be accepted by ORGADMIN to use Anaconda 3rd party packages. Please follow the instructions at https://docs.snowflake.com/en/developer-guide/udf/python/udf-python-packages.html#using-third-party-packages-from-anaconda."
Am I importing geoip2.database correctly in order to use it with Snowpark and a UDF?
Do I import it by writing session.add_packages('geoip2')?
Thank you for clearing my doubts.
The instructions I'm following for geoip2 are here:
https://geoip2.readthedocs.io/en/latest/
My code:
from snowflake.snowpark import Session
import geoip2.database
from snowflake.snowpark.functions import col
import logging
from snowflake.snowpark.types import IntegerType, StringType

logger = logging.getLogger()
logger.setLevel(logging.INFO)

session = None
user = '*********'
password = '*********'
account = '*********'
warehouse = '*********'
database = '*********'
schema = '*********'
role = '*********'

print("Connecting")
cnn_params = {
    "account": account,
    "user": user,
    "password": password,
    "warehouse": warehouse,
    "database": database,
    "schema": schema,
    "role": role,
}

def first_udf():
    with geoip2.database.Reader('GeoLite2-City.mmdb') as reader:
        response = reader.city('203.0.113.0')
        print('response.country.iso_code')
        return response

try:
    print('session..')
    session = Session.builder.configs(cnn_params).create()
    session.add_packages('geoip2')
    session.udf.register(
        func=first_udf
        , return_type=StringType()
        , input_types=[StringType()]
        , is_permanent=True
        , name='SNOWPARK_FIRST_UDF'
        , replace=True
        , stage_location='@AWS_CSV_STAGE'
    )
    session.sql('SELECT SNOWPARK_FIRST_UDF').show()
except Exception as e:
    print(e)
finally:
    if session:
        session.close()
        print('connection closed..')
    print('done.')
UPDATE
I'm trying to solve it using a Java UDF, since I already have the geoip2-2.8.0.jar library staged in my staging area. If I could import its methods to get the country of an IP it would be perfect; the problem is that I don't know how to do it exactly. I'm trying to follow these instructions: https://maxmind.github.io/GeoIP2-java/.
I want to query the database and get the ISO code of the country as output, and I want to do it in a Snowflake worksheet.
CREATE OR REPLACE FUNCTION GEO()
returns varchar not null
language java
imports = ('@AWS_CSV_STAGE/lib/geoip2-2.8.0.jar', '@AWS_CSV_STAGE/geodata/GeoLite2-City.mmdb')
handler = 'test'
as
$$
def test():
File database = new File("geodata/GeoLite2-City.mmdb")
DatabaseReader reader = new DatabaseReader.Builder(database).build();
InetAddress ipAddress = InetAddress.getByName("128.101.101.101");
CityResponse response = reader.city(ipAddress);
Country country = response.getCountry();
System.out.println(country.getIsoCode());
$$;
SELECT GEO();
This will be more complicated than it looks:
To use session.add_packages('geoip2') in Snowflake you need to accept the Anaconda terms. This is easy if you can ask your account admin.
But then you can only get the packages that Anaconda has added to Snowflake in this way. The list is at https://repo.anaconda.com/pkgs/snowflake/, and I don't see geoip2 there yet.
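As a quick check, you can also ask Snowflake itself which packages are available; a small sketch, assuming an open Snowpark session like the one in the question:

# Sketch: check whether 'geoip2' is among the packages Snowflake offers.
session.sql("""
    select package_name, version
    from information_schema.packages
    where language = 'python' and package_name = 'geoip2'
""").show()  # an empty result means the package is not available yet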
So you will need to package your own Python code (until Anaconda sees enough requests for geoip2 in the wishlist). I described the process here: https://medium.com/snowflake/generating-all-the-holidays-in-sql-with-a-python-udtf-4397f190252b.
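The gist of that process for a pure-Python dependency, as a hedged sketch: you can upload your own module alongside the UDF with session.add_import and import it inside the function (the module and file names here are hypothetical):

# Sketch: ship your own pure-Python module with the UDF, no Anaconda needed.
session.add_import('my_geo_helper.py')  # hypothetical local file

def my_udf(ip: str) -> str:
    import my_geo_helper  # resolved from the uploaded import at run time
    return my_geo_helper.lookup(ip)

session.udf.register(func=my_udf, return_type=StringType(),
                     input_types=[StringType()], name='MY_GEO_UDF', replace=True)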
But wait! GeoIP2 is not pure Python, so you will need to wait until Anaconda packages the C extension libmaxminddb. That will be harder: as you can see, their docs don't offer a straightforward way like other pip-installable C libraries.
So this will be complicated.
There are other alternative paths, like a commercial provider of this functionality (like I describe here https://medium.com/snowflake/new-in-snowflake-marketplace-monetization-315aa90b86c).
There are other approaches to get this done without using a paid dataset, but I haven't written about those yet; someone else might before I get to it.
By the way, years ago I wrote something like this for BigQuery (https://cloud.google.com/blog/products/data-analytics/geolocation-with-bigquery-de-identify-76-million-ip-addresses-in-20-seconds), but today I was notified that Google recently deleted the tables I had shared with the world (https://twitter.com/matthew_hensley/status/1598386009129058315).
So it's time to rebuild in Snowflake. But who (me?) and when are still open questions.

Need script for connecting to Cassandra with password and executing CQL using Python

I need a Python script that connects to Cassandra nodes with a username and password and executes CQL.
I tried the script below:
from cassandra.auth import PlainTextAuthProvider
from cassandra.cluster import Cluster
ap = PlainTextAuthProvider(username='##',password='##')
cass_contact_points=['cassdb01.p01.eng.sjc01.com', 'cassdb02.p01.eng.sjc01.com']
cluster = Cluster(cass_contact_points,auth_provider=ap,port=50126)
session = cluster.connect('##')
I'm getting the error below:
File "C:\python35\lib\site-packages\cassandra\cluster.py", line 2792, in _reconnect_internal
raise NoHostAvailable("Unable to connect to any servers", errors)cassandra.cluster.NoHostAvailable: ('Unable to connect to any servers', {'10.44.67.92': OperationTimedOut('errors=None, last_host=None'), '10.44.67.91': OperationTimedOut('errors=None, last_host=None')})
I see two potential problems.
First, unless you're in an upgrade scenario or you've had past problems with protocol versions, I would not specify the protocol version; the driver should negotiate that value. Setting it to 2 will fail with Cassandra 3.x, for example.
Second, I don't think the driver can properly parse ports out of the endpoints, as in:
node_ips = ['cassdb01.p01.eng.sjc01.com:50126', 'cassdb02.p01.eng.sjc01.com:50126']
When I pass the port along with my endpoints like that, I get a similar failure. So I would keep the contact points as host names only and pass the port separately:
node_ips = ['cassdb01.p01.eng.sjc01.com', 'cassdb02.p01.eng.sjc01.com']
Try starting with that. I have some other thoughts, but let's get those two obvious settings out of the way.
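Putting both suggestions together, a minimal sketch of the whole script (the credentials, keyspace, and host names are the placeholders from the question):

from cassandra.auth import PlainTextAuthProvider
from cassandra.cluster import Cluster

# Contact points as host names only; the port is passed to Cluster separately.
node_ips = ['cassdb01.p01.eng.sjc01.com', 'cassdb02.p01.eng.sjc01.com']
ap = PlainTextAuthProvider(username='##', password='##')

# No protocol_version argument; let the driver negotiate it.
cluster = Cluster(node_ips, auth_provider=ap, port=50126)
session = cluster.connect('##')  # '##' stands in for the keyspace

# Execute a trivial CQL statement to prove the connection works.
rows = session.execute('SELECT release_version FROM system.local')
for row in rows:
    print(row.release_version)

cluster.shutdown()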

Cloudera/CDH v6.1.x + Python HappyBase v1.1.0: TTransportException(type=4, message='TSocket read 0 bytes')

EDIT: This question and answer apply to anyone who is experiencing the exception stated in the subject line: TTransportException(type=4, message='TSocket read 0 bytes'), whether or not Cloudera and/or HappyBase is involved.
The root issue (as it turned out) stems from the client side's protocol and/or transport formats not matching what the server side is implementing, and this can happen with any client/server pairing. Mine just happened to be Cloudera and HappyBase, but yours needn't be, and you can run into this same issue.
Has anyone recently tried using the happybase v1.1.0 (latest) Python package to interact with Hbase on Cloudera CDH v6.1.x?
I'm trying various options with it, but keep getting the exception:
thriftpy.transport.TTransportException:
TTransportException(type=4, message='TSocket read 0 bytes')
Here is how I start a session and submit a simple call to get a listing of tables (using Python v3.6.7):
import happybase

CDH6_HBASE_THRIFT_VER = '0.92'

hbase_cnxn = happybase.Connection(
    host='vps00', port=9090,
    table_prefix=None,
    compat=CDH6_HBASE_THRIFT_VER,
    table_prefix_separator=b'_',
    timeout=None,
    autoconnect=True,
    transport='buffered',
    protocol='binary'
)
print('tables:', hbase_cnxn.tables())  # Exception happens here.
And here is how Cloudera CDH v6.1.x starts the Hbase Thrift server (truncated for brevity):
/usr/java/jdk1.8.0_141-cloudera/bin/java [... snip ... ] \
org.apache.hadoop.hbase.thrift.ThriftServer start \
--port 9090 -threadpool --bind 0.0.0.0 --framed --compact
I've tried several variations of options, but am getting nowhere.
Has anyone ever got this to work?
EDIT:
I next compiled Hbase.thrift (from the Hbase source files -- same HBase version as used by CDH v6.1.x) and used the Python thrift bindings package (in other words, I removed happybase from the equation) and got the same exception.
(._.);
Thank you!
After a day's worth of working on this, the answer to my question is the following:
import happybase

CDH6_HBASE_THRIFT_VER = '0.92'

hbase_cnxn = happybase.Connection(
    host='vps00', port=9090,
    table_prefix=None,
    compat=CDH6_HBASE_THRIFT_VER,
    table_prefix_separator=b'_',
    timeout=None,
    autoconnect=True,
    transport='framed',   # Default: 'buffered'  <---- Changed.
    protocol='compact'    # Default: 'binary'    <---- Changed.
)
print('tables:', hbase_cnxn.tables())  # Works. Output: [b'ns1:mytable', ]
Note that although this Q&A was framed in the context of Cloudera, it turns out (as you'll see) that this was related to Thrift versions and Thrift server-side configuration, and so it applies to Hortonworks and MapR users, too.
Explanation:
On Cloudera CDH v6.1.x (and probably future versions, too), if you visit the HBase Thrift Server configuration section of its management UI, you'll find, among many other settings, that the compact protocol and framed transport options are both enabled. They correspondingly needed to be changed in happybase from its defaults (which I show above).
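If you can't see the server-side configuration, one way to find the right pairing is simply to try all four combinations; a throwaway sketch, assuming the same host and port as above:

import happybase

# Probe which transport/protocol pairing the Thrift server actually speaks.
for transport in ('buffered', 'framed'):
    for protocol in ('binary', 'compact'):
        cnxn = happybase.Connection(host='vps00', port=9090,
                                    transport=transport, protocol=protocol)
        try:
            cnxn.tables()  # Raises on a mismatched pairing.
            print('works:', transport, protocol)
        except Exception:
            print('fails:', transport, protocol)
        finally:
            cnxn.close()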
As mentioned in the EDIT follow-up to my initial question, I also investigated a pure Thrift (non-happybase) solution. With analogous changes to the Python code for that case, I got that to work, too. Here is the code you should use for the pure Thrift solution (taking care to read my commented annotations below):
from thrift.protocol import TCompactProtocol  # Notice the import: TCompactProtocol [!]
from thrift.transport.TTransport import TFramedTransport  # Notice the import: TFramedTransport [!]
from thrift.transport import TSocket
from hbase import Hbase
# -- This hbase module is compiled using the thrift(1) command (version >= 0.10 [!])
#    and a Hbase.thrift file (obtained from http://archive.apache.org/dist/hbase/).
# -- Also, your "pip freeze | grep '^thrift='" should show a version >= 0.10 [!]
#    if you want Python 3 support.

(host, port) = ("vps00", 9090)

transport = TFramedTransport(TSocket.TSocket(host, port))
protocol = TCompactProtocol.TCompactProtocol(transport)
client = Hbase.Client(protocol)
transport.open()

# Do stuff here ...
print(client.getTableNames())  # Works. Output: [b'ns1:mytable', ]

transport.close()
I hope this spares people the pain I went through. =:)
CREDITS:
Here (MapR) and
Here (Blog from China)
I also encountered this problem when I was using HBase on CDH 6.3.2 recently. Following the above configuration alone was not enough: I also needed to disable hbase.regionserver.thrift.http and hbase.thrift.support.proxyuser in order to connect successfully.

Connecting to a SQL db using python

I currently have a Python script that can connect to a MySQL db and execute queries. I wish to modify it so that I can connect to a different SQL db to run a separate query. I am having trouble doing this, running OS X 10.11. This is my first question and I'm a newbie programmer, so please take it easy on me...
Here is the program I used for MySQL:
sf_username = "user"
sf_password = "pass"
sf_api_token = "token"

sf_update = beatbox.PythonClient()
password = str("%s%s" % (sf_password, sf_api_token))
sf_update.login(sf_username, password)

t = Terminal()
hub = [stuff]

def FINAL_RUN():
    cnx = alternate_modify(hub)
    cur = cnx.cursor()
    queryinput = """
    SQL QUERY I WANT
    """
    cur.execute(queryinput)
    rez = cur.fetchone()
    while rez is not None:
        write_to_sf(rez)
        rez = cur.fetchone()

FINAL_RUN()
You can use a Python library called SQLAlchemy. It abstracts away the "low-level" operations you would do with a database (e.g. specifying queries manually).
A tutorial for using SQLAlchemy can be found here.
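For illustration, a minimal sketch of running a query through SQLAlchemy; the connection URL, credentials, and query are placeholders, and the dialect/driver part changes with the target database (e.g. MySQL vs. SQL Server):

from sqlalchemy import create_engine, text

# Hypothetical URL: dialect+driver://user:password@host:port/database
engine = create_engine("mssql+pymssql://user:pass@dbhost:1433/mydb")

with engine.connect() as conn:
    result = conn.execute(text("SELECT 1 AS answer"))
    for row in result:
        print(row.answer)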
I was able to get connected using SQLAlchemy, thank you. If anyone else tries this, I think you'll need an ODBC driver, per this page:
http://docs.sqlalchemy.org/en/latest/dialects/mssql.html
Alternatively, pymssql is a nice tool. If you run into trouble installing it like I did, there is a neat workaround here:
mac - pip install pymssql error
Thanks again

MySQL driver issues with INFORMATION_SCHEMA?

I'm trying out the Concurrence framework for Stackless Python. It includes a MySQL driver and when running some code that previously ran fine with MySQLdb it fails.
What I am doing:
Connecting to the MySQL database using dbapi with username/password/port/database.
Executing SELECT * FROM INFORMATION_SCHEMA.COLUMNS
This fails with message:
Table 'mydatabase.columns' doesn't exist
"mydatabase" is the database I specified in step 1.
When doing the same query in the MySQL console after issuing "USE mydatabase", it works perfectly.
Checking the network communication yields something like this:
>>>myusername
>>>scrambled password
>>>mydatabase
>>>CMD 3 SET AUTOCOMMIT = 0
<<<0
>>>CMD 3 SELECT * FROM INFORMATION_SCHEMA.COLUMNS
<<<255
<<<Table 'mydatabase.columns' doesn't exist
Is this a driver issue (since it works in MySQLdb)? Or am I not supposed to be able to query INFORMATION_SCHEMA this way?
If I send a specific "USE INFORMATION_SCHEMA" before trying to query it, I get the expected result. But, I do not want to have to sprinkle my code all over with "USE" queries.
It definitely looks like a driver issue. Maybe the Python driver doesn't support the DB prefix.
Just to be sure, try it the other way around: first USE INFORMATION_SCHEMA and then SELECT * FROM mydatabase.sometable.
I finally found the reason.
The driver just echoed the server capability flags back in the protocol handshake, with the exception of compression:
## concurrence/database/mysql/client.py ##
client_caps = server_caps
#always turn off compression
client_caps &= ~CAPS.COMPRESS
As the server has the capability...
CLIENT_NO_SCHEMA 16 /* Don't allow database.table.column */
...that was echoed back to the server, telling it not to allow that syntax.
Adding client_caps &= ~CAPS.NO_SCHEMA did the trick.
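Putting it together, the patched block in concurrence/database/mysql/client.py would look roughly like this (assembled from the snippets above):

## concurrence/database/mysql/client.py ##
client_caps = server_caps
# always turn off compression
client_caps &= ~CAPS.COMPRESS
# don't echo CLIENT_NO_SCHEMA back; keeps database.table.column syntax allowed
client_caps &= ~CAPS.NO_SCHEMA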
