Hiding Password While Using sqlalchemy.create_engine in Python/SQL Server - python

I am importing a table created in Python to my company's SQL Server database and would like to hide my SQL Server credentials.
The code I currently have is this:
engine = sqlalchemy.create_engine(
    "mssql+pyodbc://trevor@email.com:mypassword!@dsn"
    "?authentication=ActiveDirectoryPassword"
)
df.to_sql('df', con=engine, schema='dbo', if_exists='replace', index=False)
This currently works perfectly, but I would like to hide my password, as this code will live on our company's virtual machine so it can run automatically every day. Any tips would be appreciated.
I tried using keyring as such:
import keyring

creds = keyring.get_credential(service_name="sqlupload", username=None)
username_var = creds.username
password_var = creds.password
and then replacing 'mypassword' with password_var
engine = sqlalchemy.create_engine(
    "mssql+pyodbc://trevor@email.com:password_var@dsn"
    "?authentication=ActiveDirectoryPassword"
)
df.to_sql('df', con=engine, schema='dbo', if_exists='replace', index=False)
I get the following error:
Error validating credentials due to invalid username or password
I believe this is because "mssql+pyodbc://trevor@email.com:password_var@dsn" is in quotes, so password_var is not being read as a variable.

You can use an f-string formatter in Python:
password_var = "mypassword!"
print(f'mssql+pyodbc://trevor@email.com:{password_var}@dsn')
Result:
mssql+pyodbc://trevor@email.com:mypassword!@dsn
Reference: https://docs.python.org/3/tutorial/inputoutput.html
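Putting the two pieces together with keyring, a minimal sketch (the "sqlupload" service name comes from the question; urllib.parse.quote_plus is added because a password containing characters such as '!' or '@' must be URL-encoded before being embedded in a connection URL):

import keyring
import sqlalchemy
from urllib.parse import quote_plus

# Fetch the stored credential so no secret lives in the script itself
creds = keyring.get_credential(service_name="sqlupload", username=None)

# URL-encode the password so special characters don't break the URL
password = quote_plus(creds.password)

engine = sqlalchemy.create_engine(
    f"mssql+pyodbc://trevor@email.com:{password}@dsn"
    "?authentication=ActiveDirectoryPassword"
)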

Related

Getting sqlalchemy.exc.InvalidRequestError when loading data from Excel to Snowflake

I am trying to load data from Excel to Snowflake using Python.
Below is my code so far:
from configparser import ConfigParser
import pandas as pd
from sqlalchemy import create_engine
from snowflake.sqlalchemy import URL

config = ConfigParser()
# parse ini file
config.read('config.ini')

# Read excel
file = '/path/INTRANSIT.xlsx'
df_excel = pd.read_excel(file, engine='openpyxl')

# sqlalchemy to create DB engine
engine = create_engine(URL(
    account=config.get('Production', 'accountname'),
    user=config.get('Production', 'username'),
    password=config.get('Production', 'password'),
    database=config.get('Production', 'dbname'),
    schema=config.get('Production', 'schemaname'),
    warehouse=config.get('Production', 'warehousename'),
    role=config.get('Production', 'rolename'),
))

con = engine.connect()
df_excel.to_sql('transit_table', con, if_exists='replace', index=False)
con.close()
But I am getting the error below:
sqlalchemy.exc.InvalidRequestError: Could not reflect: requested table(s) not available in Engine(snowflake://username:***@account_identifier/): (db.schema.transit_table)
I have tried prefixing the database and schema to the table name, and also tried passing the table name alone, in both uppercase and lowercase.
Still not able to resolve this error. Would really appreciate any help to resolve this!
Thank you.
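For reference, the config.ini this code expects would look something like the following (the [Production] section and key names come from the code above; all values are placeholders):

[Production]
accountname = my_account
username = my_user
password = my_password
dbname = my_db
schemaname = my_schema
warehousename = my_wh
rolename = my_role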

SQLAlchemy and the sql 'Use Database' command

I am using SQLAlchemy and create_engine to connect to MySQL, build a database, and start populating it with relevant data.
Edit: to preface, the database in question needs to be created first. To do this I perform the following commands:
database_address = 'mysql+pymysql://{0}:{1}@{2}:{3}'
database_address = database_address.format('username',
                                           'password',
                                           'ip_address',
                                           'port')
engine = create_engine(database_address, echo=False)

Database_Name = 'DatabaseName'
engine.execute("CREATE DATABASE {0}".format(Database_Name))
Following creation of the database, I try to perform a 'USE' command but end up receiving the following error:
line 3516, in _escape_identifier
value = value.replace(self.escape_quote, self.escape_to_quote)
AttributeError: 'NoneType' object has no attribute 'replace'
I traced the error to a post which stated that this occurs when using the following command in Python 3:
engine.execute("USE dbname")
What else needs to be included in the execute command to access the MySQL database and not throw an error?
You shouldn't use the USE command - instead you should specify the database you wish to connect to in the create_engine connection URL - i.e.:
database_address = 'mysql+pymysql://{0}:{1}@{2}:{3}/{4}'
database_address = database_address.format('username',
                                           'password',
                                           'ip_address',
                                           'port',
                                           'database')
engine = create_engine(database_address, echo=False)
The create_engine docs are here: https://docs.sqlalchemy.org/en/13/core/engines.html#mysql
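As a side note, on SQLAlchemy 1.4/2.0 the engine.execute shortcut used above has been removed; raw statements are run through a connection with text() instead. A minimal sketch, assuming placeholder connection details:

from sqlalchemy import create_engine, text

engine = create_engine('mysql+pymysql://username:password@ip_address:port/database', echo=False)

with engine.connect() as conn:
    # Confirms which database the URL selected - no USE statement needed
    print(conn.execute(text("SELECT DATABASE()")).scalar())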
Figured out what to do.
Following advice from Match, I looked into SQLAlchemy and its ability to create schemas, and found the following code:
from sqlalchemy import create_engine
from sqlalchemy_utils import database_exists, create_database

database_address = 'mysql+pymysql://{0}:{1}@{2}:{3}/{4}?charset=utf8mb4'
database_address = database_address.format('username', 'password', 'address', 'port', 'DB')
engine = create_engine(database_address, echo=False)

if not database_exists(engine.url):
    create_database(engine.url)
So, creating an engine with the schema name included in the URL, I can use the database_exists utility to check whether the database exists and, if not, create it with the create_database function.

Flask API facing InterfaceError with PostgreSQL

I have a Flask API based on the Flask-RESTPlus extension, hosted on Google App Engine. The API does the basic job of fetching data from a Google Cloud SQL PostgreSQL instance. The API works fine otherwise, but sometimes it starts returning InterfaceError: cursor already closed.
Strangely, when I do a gcloud app deploy, the API starts working fine again.
Here's a basic format of the API:
import simplejson as json
import psycopg2
from flask import Flask, jsonify
from flask_restplus import Api, Resource, fields
from psycopg2.extras import RealDictCursor

app = Flask(__name__)
app.config['SWAGGER_UI_JSONEDITOR'] = True

api = Api(app=app,
          doc='/docs',
          version="1.0",
          title="Title",
          description="description")

ns_pricing = api.namespace('cropPricing')

db_user = "xxxx"
db_pass = "xxxx"
db_name = "xxxxx"
cloud_sql_connection_name = "xxxxxx"

conn = psycopg2.connect(user=db_user,
                        password=db_pass,
                        host='xxxxx',
                        dbname=db_name)


@ns_pricing.route('/list')
class States(Resource):
    def get(self):
        """
        List all the states for which data is available.
        """
        cur = conn.cursor(cursor_factory=RealDictCursor)
        query = """
            SELECT DISTINCT state
            FROM db.table
        """
        conn.commit()
        cur.execute(query)
        states = json.loads(json.dumps(cur.fetchall()))
        if len(states) == 0:
            return jsonify(data=[],
                           status="Error",
                           message="Requested data not found")
        else:
            return jsonify(status="Success",
                           message="Successfully retrieved states",
                           data=states)
What should I fix to not see the error anymore?
It would be good to use an ORM such as SQLAlchemy / Flask-SQLAlchemy, which would handle establishing and re-establishing connections for you.
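For the SQLAlchemy route, a minimal sketch (the connection URL is a placeholder) that turns on pool_pre_ping, which tests each pooled connection before use and transparently replaces ones the server has dropped:

from sqlalchemy import create_engine, text

engine = create_engine(
    'postgresql+psycopg2://db_user:db_pass@host/db_name',  # placeholder URL
    pool_pre_ping=True,
)

with engine.connect() as conn:
    states = conn.execute(text("SELECT DISTINCT state FROM db.table")).fetchall()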
Though, if using plain psycopg2, you can use try/except to catch the exception and re-establish the connection:
try:
    cur.execute(query)
except psycopg2.InterfaceError as err:
    print(err)
    conn = psycopg2.connect(....)
    cur = conn.cursor()
    cur.execute(query)
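Wrapped into a reusable helper, that retry pattern might look like this sketch (get_connection and its parameters are hypothetical stand-ins for your real connection setup):

import psycopg2
from psycopg2.extras import RealDictCursor

def get_connection():
    # Hypothetical helper - substitute your real connection parameters
    return psycopg2.connect(user="xxxx", password="xxxx",
                            host="xxxxx", dbname="xxxxx")

conn = get_connection()

def run_query(query):
    global conn
    try:
        cur = conn.cursor(cursor_factory=RealDictCursor)
        cur.execute(query)
    except (psycopg2.InterfaceError, psycopg2.OperationalError):
        # The connection was dropped - reconnect once and retry
        conn = get_connection()
        cur = conn.cursor(cursor_factory=RealDictCursor)
        cur.execute(query)
    return cur.fetchall()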

How can I change password for sql login to database using pyodbc?

I am trying to change the password for a SQL login to a database using pyodbc in Python.
But I get an error - Wrong syntax near object "@P1" (102) (SQLExecDirectW).....Cannot prepare the instruction (8180)
config.login = 'user'
config.haslo = '12345'
haslo = 'abcde'

con = pyodbc.connect("DRIVER={ODBC Driver 11 for SQL Server};"
                     "SERVER=Serwer;"
                     "DATABASE=Baza;"
                     "UID=" + config.login + ";"
                     "PWD=" + config.haslo + ";",
                     autocommit=True)
kursor = con.cursor()
zapytanie = """ALTER LOGIN ? WITH PASSWORD = ? OLD_PASSWORD = ?"""
val = (config.login, haslo, config.haslo)
kursor.execute(zapytanie, val)
kursor.commit()
kursor.close()
del kursor
SQL Server ODBC apparently does not support parameterization of an ALTER LOGIN statement. Instead of using this ...
uid = 'bubba'
old_pwd = 'NASCAR'
new_pwd = 'GRITS'
sql = "ALTER LOGIN ? WITH password ? old_password ?"
crsr.execute(sql, uid, new_pwd, old_pwd)
... you will need to do something like this:
uid = 'bubba'
old_pwd = 'NASCAR'
new_pwd = 'GRITS'
sql = f"ALTER LOGIN {uid} WITH PASSWORD = '{new_pwd}' OLD_PASSWORD = '{old_pwd}'"
crsr.execute(sql)
IMPORTANT - As with all dynamic SQL, this is potentially vulnerable to SQL injection issues. Be sure to sanitize the login_id and password values!
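One common mitigation (a sketch, not a complete defense) is to escape embedded single quotes and bracket-quote the login name before building the statement; escape_sql_string below is a hypothetical helper:

def escape_sql_string(value):
    # Double any single quotes so the value cannot terminate the literal
    return value.replace("'", "''")

uid = 'bubba'
old_pwd = 'NASCAR'
new_pwd = 'GRITS'

sql = (f"ALTER LOGIN [{uid}] "
       f"WITH PASSWORD = '{escape_sql_string(new_pwd)}' "
       f"OLD_PASSWORD = '{escape_sql_string(old_pwd)}'")
crsr.execute(sql)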
(Long-time users of SQL Server may recall that there is a system stored procedure named sp_password but the documentation indicates that it is deprecated.)

How to run a BigQuery query in Python

This is the query that I have been running in BigQuery that I want to run in my Python script. How would I change this, or what would I have to add, for it to run in Python?
#standardSQL
SELECT
Serial,
MAX(createdAt) AS Latest_Use,
SUM(ConnectionTime/3600) as Total_Hours,
COUNT(DISTINCT DeviceID) AS Devices_Connected
FROM `dataworks-356fa.FirebaseArchive.testf`
WHERE Model = "BlueBox-pH"
GROUP BY Serial
ORDER BY Serial
LIMIT 1000;
From what I have been researching, it seems I can't save this query as a permanent table using Python. Is that true? And if it is, is it still possible to export a temporary table?
You need to use the BigQuery Python client lib, then something like this should get you up and running:
from google.cloud import bigquery

client = bigquery.Client(project='PROJECT_ID')
query = "SELECT...."
dataset = client.dataset('dataset')
table = dataset.table(name='table')
job = client.run_async_query('my-job', query)
job.destination = table
job.write_disposition = 'WRITE_TRUNCATE'
job.begin()
https://googlecloudplatform.github.io/google-cloud-python/stable/bigquery-usage.html
Note that run_async_query belongs to an older release of the google-cloud-bigquery library; in current releases the equivalent entry point is client.query. See the current BigQuery Python client tutorial.
Here is another way using a JSON file for the service account:
>>> from google.cloud import bigquery
>>>
>>> CREDS = 'test_service_account.json'
>>> client = bigquery.Client.from_service_account_json(json_credentials_path=CREDS)
>>> job = client.query('select * from dataset1.mytable')
>>> for row in job.result():
... print(row)
This is a good usage guide:
https://googleapis.github.io/google-cloud-python/latest/bigquery/usage/index.html
To simply run and write a query:
from google.cloud import bigquery

client = bigquery.Client()
dataset_id = 'your_dataset_id'

job_config = bigquery.QueryJobConfig()

# Set the destination table
table_ref = client.dataset(dataset_id).table("your_table_id")
job_config.destination = table_ref

sql = """
    SELECT corpus
    FROM `bigquery-public-data.samples.shakespeare`
    GROUP BY corpus;
"""

# Start the query, passing in the extra configuration.
query_job = client.query(
    sql,
    # Location must match that of the dataset(s) referenced in the query
    # and of the destination table.
    location="US",
    job_config=job_config,
)  # API request - starts the query

query_job.result()  # Waits for the query to finish

print("Query results loaded to table {}".format(table_ref.path))
I personally prefer querying using pandas:
# BQ authentication
import pandas as pd
import pydata_google_auth

SCOPES = [
    'https://www.googleapis.com/auth/cloud-platform',
    'https://www.googleapis.com/auth/drive',
]

credentials = pydata_google_auth.get_user_credentials(
    SCOPES,
    # Set auth_local_webserver to True to have a slightly more convenient
    # authorization flow. Note, this doesn't work if you're running from a
    # notebook on a remote server, such as over SSH or with Google Colab.
    auth_local_webserver=True,
)

query = "SELECT * FROM my_table"
data = pd.read_gbq(query, project_id=MY_PROJECT_ID, credentials=credentials, dialect='standard')
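Since the question also asks about saving results as a permanent table, note that the same pandas route can write the DataFrame back with to_gbq; a short sketch (the destination table name is a placeholder):

# Write the query result back to BigQuery as a permanent table
data.to_gbq('my_dataset.my_results', project_id=MY_PROJECT_ID,
            credentials=credentials, if_exists='replace')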
The pythonbq package is very simple to use and a great place to start. It uses python-gbq.
To get started you would need to generate a BQ JSON key for external app access, which you can do from the service accounts page of the Google Cloud console.
Your code would look something like:
from pythonbq import pythonbq

myProject = pythonbq(
    bq_key_path='path/to/bq/key.json',
    project_id='myGoogleProjectID'
)

SQL_CODE = """
SELECT
  Serial,
  MAX(createdAt) AS Latest_Use,
  SUM(ConnectionTime/3600) AS Total_Hours,
  COUNT(DISTINCT DeviceID) AS Devices_Connected
FROM `dataworks-356fa.FirebaseArchive.testf`
WHERE Model = "BlueBox-pH"
GROUP BY Serial
ORDER BY Serial
LIMIT 1000;
"""

output = myProject.query(sql=SQL_CODE)
