Python connect to SQL Server using an AES-encrypted password

I have an application (not mine) whose config file contains an encrypted password, and I guess it uses AES.
I would like to write a small Python script that tests the connection using this encrypted password, to make sure I can use it.
But I do not know how to use the encrypted password in the code:
conn = pyodbc.connect('Driver={SQL Server};'
                      'Server=192.168.10.1;'
                      'Database=AUIS;'
                      'uid=sa;'
                      'pwd=BR+vNRCyv0pxHF97Aad2JA==;')
cursor = conn.cursor()
cursor.execute('SELECT * FROM AUIS.Table')
for row in cursor:
    print(row)

Try adding ColumnEncryption:
conn = pyodbc.connect('Driver={SQL Server};'
                      'Server=192.168.10.1;'
                      'Database=AUIS;'
                      'uid=sa;'
                      'pwd=BR+vNRCyv0pxHF97Aad2JA==;'
                      'ColumnEncryption=Enabled;')
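Neither connection string will work as-is: the ODBC driver expects the plaintext password, and ColumnEncryption=Enabled only turns on Always Encrypted for column data; it does not decrypt a connection password. You would have to decrypt the value first, which requires knowing the application's key (and cipher mode/IV). A minimal sketch using pycryptodome, assuming AES-CBC with a hypothetical key and IV:
import base64
import pyodbc
from Crypto.Cipher import AES
from Crypto.Util.Padding import unpad

KEY = b'0123456789abcdef'  # hypothetical 16-byte key -- you need the application's real key
IV = b'\x00' * 16          # hypothetical IV -- also application-specific

encrypted = base64.b64decode('BR+vNRCyv0pxHF97Aad2JA==')  # 16 bytes = one AES block
cipher = AES.new(KEY, AES.MODE_CBC, IV)
password = unpad(cipher.decrypt(encrypted), AES.block_size).decode('utf-8')

conn = pyodbc.connect('Driver={SQL Server};'
                      'Server=192.168.10.1;'
                      'Database=AUIS;'
                      'uid=sa;'
                      f'pwd={password};')
With the wrong key or mode this raises a padding error rather than producing the password, so the real parameters must come from the application itself.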

Related

Download files from a bytea column in PostgreSQL with Python

Hi,
Can I download a file stored in a bytea column of my database with Python?
I'm trying to do it with psycopg2. I uploaded a .txt file, but when I try to retrieve it to my local machine, it just saves a .txt file with unreadable data. The text starts like "U0ZUUCBhZGRy....", so it looks like encoded bytes, the same as what the DB stores.
A screenshot of the column in DBeaver (example_dbeaver_column) shows the same content.
This is the code I used:
import psycopg2

connection = psycopg2.connect(dbname=dbname,
                              host=host,
                              port=port,
                              user=user,
                              password=password)
# get cursor
cursor = connection.cursor()
query = "select t.file from my_table t where t.file_name = 'credentials.txt'"
cursor.execute(query)
data = cursor.fetchall()
# psycopg2 returns bytea as a memoryview; convert it to bytes
file_binary = data[0][0].tobytes()
with open('my_text.txt', 'wb') as file:
    file.write(file_binary)
Any ideas how I can solve this problem?
Thanks for your help
I found it!
The data in Postgres is stored base64-encoded, so I need to decode it with the base64 library:
import base64
import psycopg2

connection = psycopg2.connect(dbname=dbname,
                              host=host,
                              port=port,
                              user=user,
                              password=password)
# get cursor
cursor = connection.cursor()
query = "select t.file from my_table t where t.file_name = 'credentials.txt'"
cursor.execute(query)
data = cursor.fetchall()
file_binary = data[0][0].tobytes()
with open('my_text.txt', 'wb') as file:
    # the stored bytes are base64 text; decode to recover the original file
    file.write(base64.b64decode(file_binary))
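As an alternative (a sketch, reusing the cursor from the snippet above and the table and column names from the question), PostgreSQL can do the base64 decoding server-side, so Python receives the original bytes directly:
# assumes the bytea column holds UTF-8 base64 text, as in the question
query = """
    select decode(convert_from(t.file, 'UTF8'), 'base64')
    from my_table t
    where t.file_name = 'credentials.txt'
"""
cursor.execute(query)
# still a memoryview, but now of the already-decoded file contents
file_binary = cursor.fetchone()[0].tobytes()
with open('my_text.txt', 'wb') as file:
    file.write(file_binary)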

SQL connection string into function

I am trying to put a connection string into a function for database connection:
conn = pyodbc.connect('Driver={SQL Server Native Client 11.0};'
                      'Server=trve-tl;'
                      'Database=team;'
                      'Trusted_Connection=yes;')
with open("bike.sql", "r") as file:
    query = file.read()
I have attempted to create a function as follows:
def get_connection(server:str, Database:str)->str:
    global conn = pyodbc.connect('Driver={SQL Server Native Client 11.0};'
                                 'Server='+server+';'
                                 'Database='+Database+';'
                                 'Trusted_Connection=yes;')
    return conn
I get the following error:
global conn = pyodbc.connect('Driver={SQL Server Native Client 11.0};'
^
SyntaxError: invalid syntax
You need to remove the global keyword from the conn assignment and then store the function's return value in a new variable. (The return annotation should also be pyodbc.Connection rather than str, since the function returns a connection, not a string.)
def get_connection(server: str, database: str) -> pyodbc.Connection:
    conn = pyodbc.connect('Driver={SQL Server Native Client 11.0};'
                          'Server=' + server + ';'
                          'Database=' + database + ';'
                          'Trusted_Connection=yes;')
    return conn
Then call it like this:
cn = get_connection("localhost","MyDB")
cn.close()
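A slightly more compact variant of the same function (a sketch) builds the string with f-strings and makes sure the connection is closed even if the query fails:
import pyodbc

def get_connection(server: str, database: str) -> pyodbc.Connection:
    return pyodbc.connect(
        'Driver={SQL Server Native Client 11.0};'
        f'Server={server};'
        f'Database={database};'
        'Trusted_Connection=yes;'
    )

cn = get_connection("localhost", "MyDB")
try:
    with open("bike.sql", "r") as file:
        query = file.read()
    cn.execute(query)  # pyodbc connections can execute directly
finally:
    cn.close()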

Python script hangs when trying to connect to SQL database

My Python script hangs when I run it, and I am not sure what is getting it stuck. I just had a battle installing new ODBC drivers for Postgres; could that be the reason?
import pyodbc

server = 'localhost'
database = 'screens_database'
username = 'postgres'
password = ''
driver = '{ODBC Driver 17 for SQL Server}'

cxcn = pyodbc.connect('DRIVER=' + driver + ';SERVER=' + server +
                      ';PORT=5000;DATABASE=' + database +
                      ';UID=' + username + ';PWD=' + password)
# cursor
cur = cxcn.cursor()
cur.execute("select * from screens_database")
cxcn.close()
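One plausible cause, given the mention of Postgres (an assumption, since the question gives no more detail): the connection string uses the SQL Server ODBC driver against what appears to be a PostgreSQL server, and that protocol mismatch can stall rather than fail cleanly. A sketch that connects with a PostgreSQL ODBC driver and a login timeout instead; the driver name and port 5432 are assumptions, so check pyodbc.drivers() and your server's configuration:
import pyodbc

print(pyodbc.drivers())  # find the exact PostgreSQL driver name installed

server = 'localhost'
database = 'screens_database'
username = 'postgres'
password = ''
driver = '{PostgreSQL Unicode}'  # assumed name; pick one printed above

# timeout=5 applies a 5-second login timeout, so a stalled
# handshake raises an error instead of hanging indefinitely
cxcn = pyodbc.connect('DRIVER=' + driver + ';SERVER=' + server +
                      ';PORT=5432;DATABASE=' + database +
                      ';UID=' + username + ';PWD=' + password,
                      timeout=5)
cur = cxcn.cursor()
cur.execute("select * from screens_database")
rows = cur.fetchall()
cxcn.close()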

Getting an error in Python while transferring data from SQL Server to Snowflake

I am getting the error below:
query = command % processed_params
TypeError: not all arguments converted during string formatting
I am trying to pull data from SQL Server and then insert it into Snowflake.
My code is below:
import pyodbc
import sqlalchemy
import snowflake.connector

driver = 'SQL Server'
server = 'tanmay'
db1 = 'testing'
tcon = 'no'
uname = 'sa'
pword = '123'

cnxn = pyodbc.connect(driver='{SQL Server}',
                      host=server, database=db1, trusted_connection=tcon,
                      user=uname, password=pword)
cursor = cnxn.cursor()
cursor.execute("select * from Admin_tbldbbackupdetails")
rows = cursor.fetchall()
#for row in rows:
#    #data = [(row[0], row[1], row[2], row[3], row[4], row[5], row[6], row[7])]
print(rows[0])
cnxn.commit()
cnxn.close()

connection = snowflake.connector.connect(user='****', password='****', account='*****')
cursor2 = connection.cursor()
cursor2.execute("USE WAREHOUSE FOOD_WH")
cursor2.execute("USE DATABASE Test")
sql1 = ("INSERT INTO CN_RND.Admin_tbldbbackupdetails_ip "
        "(id, dbname, dbpath, backupdate, backuptime, backupStatus, FaildMsg, Backupsource) "
        "values (?,?,?,?,?,?,?,?)")
cursor2.execute(sql1, *rows[0])
It's obviously a string-formatting error: no parameters were substituted into the %s placeholders.
If you cannot fix it, step back and try another approach; use another script to achieve the same thing and get back to your bug tomorrow :-)
My script does pretty much the same:
1. Connect to SQL Server
2. fetchmany
3. Multipart upload to S3
4. COPY INTO the Snowflake table
Details are here: Snowpipe-for-SQLServer
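For the original TypeError specifically: the Snowflake connector defaults to the pyformat paramstyle (%s placeholders), so the ?-style placeholders in the INSERT are never substituted. A minimal sketch of one fix, switching the connector to qmark before connecting (names taken from the question; rows comes from the pyodbc query above):
import snowflake.connector

# must be set before snowflake.connector.connect() is called
snowflake.connector.paramstyle = 'qmark'

connection = snowflake.connector.connect(user='****', password='****', account='*****')
cursor2 = connection.cursor()
cursor2.execute("USE WAREHOUSE FOOD_WH")
cursor2.execute("USE DATABASE Test")

sql1 = ("INSERT INTO CN_RND.Admin_tbldbbackupdetails_ip "
        "(id, dbname, dbpath, backupdate, backuptime, backupStatus, FaildMsg, Backupsource) "
        "values (?,?,?,?,?,?,?,?)")
# pyodbc rows are Row objects, not tuples -- convert before binding
cursor2.execute(sql1, tuple(rows[0]))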

Python to SQL Server Insert

I'm trying to follow the method for inserting a pandas data frame into SQL Server that is mentioned here, as it appears to be the fastest way to import a lot of rows.
However, I am struggling to figure out the connection parameters.
I am not using a DSN; I have a server name, a database name, and I am using a trusted connection (i.e. Windows login).
import sqlalchemy
import urllib.parse

server = 'MYServer'
db = 'MyDB'
cxn_str = "DRIVER={SQL Server Native Client 11.0};SERVER=" + server + ",1433;DATABASE=" + db + ";Trusted_Connection='Yes'"
#cxn_str = "Trusted_Connection='Yes',Driver='{ODBC Driver 13 for SQL Server}',Server="+server+",Database="+db
params = urllib.parse.quote_plus(cxn_str)
engine = sqlalchemy.create_engine("mssql+pyodbc:///?odbc_connect=%s" % params)
conn = engine.connect().connection
cursor = conn.cursor()
I'm just not sure what the correct way to specify my connection string is. Any suggestions?
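For reference, a sketch of what the fast-insert approach typically looks like with a trusted connection, assuming the linked method is pandas.DataFrame.to_sql with fast_executemany (SQLAlchemy 1.3+, ODBC Driver 17; note the unquoted Trusted_Connection=yes, per the answer at the end):
import urllib.parse

import pandas as pd
import sqlalchemy

server = 'MYServer'
db = 'MyDB'

# parameter values in ODBC connection strings are not quoted
cxn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    f"SERVER={server},1433;"
    f"DATABASE={db};"
    "Trusted_Connection=yes;"
)
params = urllib.parse.quote_plus(cxn_str)
engine = sqlalchemy.create_engine(
    f"mssql+pyodbc:///?odbc_connect={params}",
    fast_executemany=True,  # batches the INSERTs through pyodbc
)

df = pd.DataFrame({'a': [1, 2, 3]})
df.to_sql('my_table', engine, if_exists='append', index=False)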
I have been working with pandas and SQL Server for a while, and the fastest way I found to insert a lot of data into a table is this:
You can create a temporary CSV using:
df.to_csv('new_file_name.csv', sep=',', encoding='utf-8')
Then use pyodbc and the BULK INSERT Transact-SQL statement:
import pyodbc

conn = pyodbc.connect(DRIVER='{SQL Server}', Server='server_name', Database='Database_name', trusted_connection='yes')
cur = conn.cursor()
# '\\n' keeps the literal \n escape in the SQL text for BULK INSERT
cur.execute("""BULK INSERT table_name
FROM 'C:\\Users\\folders path\\new_file_name.csv'
WITH
(
    CODEPAGE = 'ACP',
    FIRSTROW = 2,
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\\n'
)""")
conn.commit()
cur.close()
conn.close()
Then you can delete the file:
import os
os.remove('new_file_name.csv')
It took only a second to load a lot of data at once into SQL Server. I hope this gives you an idea.
Note: don't forget that the CSV includes the data frame index as a field, so the target table needs a column for it. That was my mistake when I started using this, lol.
Connection string parameter values should not be enclosed in quotes, so you should use Trusted_Connection=Yes instead of Trusted_Connection='Yes'.
