How to execute query formed by python-sql query builder? - python

I am using the python-sql query builder to build queries. Below is the link:
https://pypi.org/project/python-sql/
How do I execute queries from this query builder? Below is an example:
>>> user = Table('user')
>>> select = user.select()
>>> tuple(select)
('SELECT * FROM "user" AS "a"', ())
How to execute this in python?

It seems that python-sql only builds the SQL string and its parameters; it does not execute anything. You can execute the generated query with pyodbc or another DB-API library, for example with SQL Server:
import pyodbc
from sql import Table

conn = pyodbc.connect("Driver={SQL Server Native Client 11.0};"
                      "Server=YourServer;"
                      "Database=YourDatabase;"
                      "Trusted_Connection=yes;")
cursor = conn.cursor()

user = Table('user')
select = user.select()
query, params = tuple(select)  # tuple(select) yields (sql_string, params)
cursor.execute(query, params)
for row in cursor:
    print('row = %r' % (row,))
For other database systems, just change the driver name and connection string accordingly.
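Since python-sql only produces a (sql, params) pair, any DB-API connection can run it. Below is a minimal, self-contained sketch using sqlite3; the (query, params) pair is written out literally to stand in for tuple(select), and the table and data are made up. Note that python-sql emits format-style (%s) placeholders by default, so for sqlite3 you would first switch its Flavor to the qmark paramstyle.

```python
import sqlite3

# Stand-in for tuple(select) output from python-sql, assuming the
# qmark paramstyle; the "user" table below is made up for the demo.
query, params = ('SELECT * FROM "user" WHERE "id" = ?', (1,))

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE "user" (id INTEGER, name TEXT)')
conn.execute('INSERT INTO "user" VALUES (1, ?)', ('alice',))

# Execute the generated pair exactly as a DB-API call.
rows = list(conn.execute(query, params))
print(rows)  # [(1, 'alice')]
```

The same two-element pattern works with pyodbc, psycopg2, etc., as long as the Flavor's paramstyle matches what the driver expects.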

Related

How to restore MySQL database using Python

I have a sql file generated during database backup process and I want to load all database content from that sql file to a different MySQL database (secondary database).
I have created a Python function to load the whole database from that sql file, but when I execute the function I get an error:
'str' object is not callable
Below is python script
def load_database_dump_to_secondary_mysql(file_path='db_backup_file.sql'):
    query = f'source {file_path}'
    try:
        connection = mysql_hook.get_conn()  # connection to secondary db
        cursor = connection.cursor(query)
        print('LOAD TO MYSQL COMPLETE')
    except Exception as xerror:
        print("LOAD ERROR: ", xerror)
NB: mysql_hook is an Airflow connector that contains the MySQL DB connection info (host, user/password, database name). Also, I don't have a connection to the primary database; I'm only receiving the sql dump file.
What am I missing?
source is a client built-in command: https://dev.mysql.com/doc/refman/8.0/en/mysql-commands.html
It's not an SQL statement that MySQL's SQL parser understands, so you can't execute source with cursor.execute(), which goes directly to the dynamic SQL interface.
You must run it through the MySQL command-line client as a subprocess:
import subprocess
subprocess.run(['mysql', '-e', f'source {file_path}'])
You might need other options to the mysql client, such as user, password, host, etc.
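A fuller sketch of building that client invocation with options; the host, user, and database names here are made up, and in real use the password should come from an option file or the interactive prompt rather than argv:

```python
import subprocess

def build_restore_command(file_path, host, user, database):
    """Build the argv list for restoring a dump via the mysql client."""
    return [
        'mysql',
        f'--host={host}',
        f'--user={user}',
        '-p',                      # prompt for the password interactively
        database,
        '-e', f'source {file_path}',
    ]

cmd = build_restore_command('db_backup_file.sql', 'secondary-db', 'loader', 'mydb')
# subprocess.run(cmd, check=True)  # uncomment to actually invoke the client
print(cmd)
```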
try this
import mysql.connector as m

# database which you want to backup
db = 'geeksforgeeks'
connection = m.connect(host='localhost', user='root',
                       password='123', database=db)
cursor = connection.cursor()

# Getting all the table names
cursor.execute('SHOW TABLES;')
table_names = [record[0] for record in cursor.fetchall()]

backup_dbname = db + '_backup'
try:
    cursor.execute(f'CREATE DATABASE {backup_dbname}')
except m.Error:
    pass  # backup database already exists
cursor.execute(f'USE {backup_dbname}')

for table_name in table_names:
    cursor.execute(
        f'CREATE TABLE {table_name} SELECT * FROM {db}.{table_name}')
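Note that this copies tables into a backup database on the same server rather than restoring from the dump file. The copy-table pattern itself can be sketched with sqlite3 as a stand-in (sqlite spells it CREATE TABLE ... AS SELECT and uses ATTACH in place of a second database; the table here is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")                    # stands in for the source db
conn.execute("ATTACH DATABASE ':memory:' AS backup")  # stands in for db_backup
conn.execute("CREATE TABLE items (id INTEGER, name TEXT)")
conn.execute("INSERT INTO items VALUES (1, 'widget')")

# Copy every table in the main schema into the backup schema.
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
for t in tables:
    conn.execute(f"CREATE TABLE backup.{t} AS SELECT * FROM main.{t}")

copied = list(conn.execute("SELECT * FROM backup.items"))
print(copied)  # [(1, 'widget')]
```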

Connect to SQL Server and run query as "passthrough" from Python

I currently have code that executes queries on data stored on a SQL Server database, such as the following:
import pyodbc

conn = pyodbc.connect(
    r'DRIVER={SQL Server};'
    r'SERVER=SQL2SRVR;'
    r'DATABASE=DBO732;'
    r'Trusted_Connection=yes;'
)
sqlstr = '''
SELECT Company, Street_Address, City, State
FROM F556
WHERE [assume complicated criteria statement here]
'''
crsr = conn.cursor()
for row in crsr.execute(sqlstr):
    print(row.Company, row.Street_Address, row.City, row.State)
I can't find documentation on whether pyodbc runs my queries on the SQL Server itself (as passthrough queries), by default or otherwise, or whether there is another way of doing that (maybe sqlalchemy or similar?). Any insight?
Or is there a way to execute passthrough queries directly from Pandas?
If you are working with pandas and SQL Server then you should already have created a SQLAlchemy Engine object (usually named engine). To execute a raw DML statement you can use this construct (text() is required in SQLAlchemy 1.4+):
from sqlalchemy import text

with engine.begin() as conn:
    conn.execute(text("UPDATE table_name SET column_name ..."))
    print("table updated")
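For reading, pandas sends the query text straight to the database connection, so the filtering in the WHERE clause already runs on the server; in that sense every pd.read_sql_query call is a passthrough query. A minimal sketch with sqlite3 standing in for the SQL Server connection (the F556 table and its rows are made up to mirror the question):

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE F556 (Company TEXT, City TEXT)")
conn.execute("INSERT INTO F556 VALUES ('Acme', 'Springfield')")

# The SQL executes inside the database engine; pandas only
# receives the finished result set.
df = pd.read_sql_query("SELECT Company, City FROM F556 WHERE City = ?",
                       conn, params=("Springfield",))
print(df)
```

With SQL Server you would pass a SQLAlchemy engine (or pyodbc connection) instead of the sqlite3 connection.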

Getting error on python while transferring data from SQL server to snowflake

I am getting the error below:
query = command % processed_params
TypeError: not all arguments converted during string formatting
I am trying to pull data from SQL Server and then insert it into Snowflake. Below is my code:
import pyodbc
import sqlalchemy
import snowflake.connector

driver = 'SQL Server'
server = 'tanmay'
db1 = 'testing'
tcon = 'no'
uname = 'sa'
pword = '123'

cnxn = pyodbc.connect(driver='{SQL Server}',
                      host=server, database=db1, trusted_connection=tcon,
                      user=uname, password=pword)
cursor = cnxn.cursor()
cursor.execute("select * from Admin_tbldbbackupdetails")
rows = cursor.fetchall()
#for row in rows:
#    #data = [(row[0], row[1], row[2], row[3], row[4], row[5], row[6], row[7])]
print(rows[0])
cnxn.commit()
cnxn.close()

connection = snowflake.connector.connect(user='****', password='****', account='*****')
cursor2 = connection.cursor()
cursor2.execute("USE WAREHOUSE FOOD_WH")
cursor2.execute("USE DATABASE Test")
sql1 = ("INSERT INTO CN_RND.Admin_tbldbbackupdetails_ip"
        "(id,dbname, dbpath, backupdate, backuptime, backupStatus, FaildMsg, Backupsource)"
        "values (?,?,?,?,?,?,?,?)")
cursor2.execute(sql1, *rows[0])
It's a string-formatting error. The Snowflake connector's default paramstyle is pyformat, so it substitutes parameters with command % processed_params, and your ? placeholders are never converted. Either write the placeholders as %s, or set snowflake.connector.paramstyle = 'qmark' before connecting, and pass the values as one sequence: cursor2.execute(sql1, rows[0]).
If you cannot fix it, step back and try another approach; get back to your bug tomorrow :-)
My script is doing pretty much the same:
1. Connect to SQL Server
2. fetchmany
3. Multipart upload to S3
4. COPY INTO the Snowflake table
Details are here: Snowpipe-for-SQLServer
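The underlying failure can be reproduced with plain Python string formatting: under pyformat-style substitution, ? placeholders consume nothing, so the arguments are left over (the table and values below are made up):

```python
# Mimics what a pyformat-style connector does internally:
# query = command % processed_params
command = "INSERT INTO t (id, name) values (?,?)"
params = (1, 'db1')

try:
    query = command % params   # no %s in the string, so args go unconsumed
except TypeError as err:
    message = str(err)
print(message)  # not all arguments converted during string formatting
```

With the qmark paramstyle enabled, the connector binds the ? placeholders itself instead of running the % substitution, which is why switching paramstyle (or switching the placeholders to %s) resolves the error.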

Run the query saved in MS Access with required parameters through Pyodbc?

I am using Pyodbc to connect my program with MS Access. In the Access database, I pre-created some queries that require parameters. How can I pass values to parameters of the queries when executing them in Python?
When an Access database contains saved parameter queries they are exposed by Access ODBC as stored procedures and can be invoked using the ODBC {call ...} syntax. For example, with a saved query named [ClientEmails] ...
PARAMETERS prmLastName Text ( 255 );
SELECT Clients.ID, Clients.LastName, Clients.FirstName, Clients.Email
FROM Clients
WHERE (((Clients.LastName)=[prmLastName]));
... the following Python code will run that query and return results for a specific Last Name:
cmd = "{call ClientEmails(?)}"
params = ("Thompson",)
crsr.execute(cmd, params)  # pyodbc "cursor" object
for row in crsr.fetchall():
    print(row)
Here's a generalized example. First, connect to the database. Then, issue commands. The command is just a string. You can incorporate variables from elsewhere in your code through simple string concatenation.
import pyodbc

connStr = (
    r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=C:\full\path\to\your\PYODBC.accdb;"
)
cnxn = pyodbc.connect(connStr)
cursor = cnxn.cursor()

desired_column = "Forename"
table_name = "Student"
command = "SELECT " + desired_column + " FROM " + table_name
cursor.execute(command)
row = cursor.fetchone()
if row:
    print(row)
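Concatenation is workable for identifiers you control, but values should be bound as ? parameters instead. A sketch using sqlite3 as a stand-in, since sqlite3 and pyodbc both use the qmark paramstyle (the Student table and its row are made up):

```python
import sqlite3

cnxn = sqlite3.connect(":memory:")
cursor = cnxn.cursor()
cursor.execute("CREATE TABLE Student (ID INTEGER, Forename TEXT)")
cursor.execute("INSERT INTO Student VALUES (1, 'Ada')")

# The identifier comes from trusted code; the value is bound as a parameter.
desired_column = "Forename"
cursor.execute(f"SELECT {desired_column} FROM Student WHERE ID = ?", (1,))
row = cursor.fetchone()
print(row)  # ('Ada',)
```

The same execute(sql, params) call works unchanged against the Access ODBC driver.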

how can I use stored procedure with parameters in python code

I want to use stored procedure in python code like below.
import pyodbc

conn = pyodbc.connect('Trusted_Connection=yes', driver='{SQL Server}',
                      server=r'ZAMAN\SQLEXPRESS', database='foy3')

def InsertUser(studentID, name, surname, birth, address, telephone):
    cursor = conn.cursor()
    cursor.execute("exec InserttoDB studentID,name,surname,birth,address,telephone")
    rows = cursor.fetchall()
I have a problem below part of code. How can I send function parametters to DB with InserttoDB (stored procedure)
cursor.execute("exec InserttoDB studentID,name,surname,birth,address,telephone")
Since this is SQL Server via pyodbc, pass the values as a parameter tuple using the ODBC {call ...} escape syntax:
cursor.execute("{call InserttoDB (?,?,?,?,?,?)}",
               (studentID, name, surname, birth, address, telephone))
